Ten Little Algorithms, Part 6: Green’s Theorem and Swept-Area Detection

Jason Sachs · June 18, 2017 · 2 comments

This article is mainly an excuse to scribble down some cryptic-looking mathematics — Don’t panic! Close your eyes and scroll down if you feel nauseous — and...
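
For orientation, the idea behind the title is standard: the area swept by the vector from the origin to a moving point \( (x, y) \) is \( A = \frac{1}{2} \oint (x\,dy - y\,dx) \) by Green's theorem, which for sampled data becomes a running sum of cross products. A minimal sketch (the function name and test path are mine, not the article's):

def swept_area(points):
    """Signed area swept by the vector from the origin to each
    successive sample point, via the discrete form of Green's theorem."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += 0.5 * (x0 * y1 - x1 * y0)
    return area

import math
# One full counterclockwise turn around the unit circle sweeps area pi
circle = [(math.cos(2 * math.pi * k / 1000),
           math.sin(2 * math.pi * k / 1000)) for k in range(1001)]
print(swept_area(circle))   # ~3.14159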


Round Round Get Around: Why Fixed-Point Right-Shifts Are Just Fine

Jason Sachs · November 22, 2016 · 3 comments

Today’s topic is rounding in embedded systems, or more specifically, why you don’t need to worry about it in many cases.

One of the issues faced in computer arithmetic is that exact arithmetic requires an ever-increasing bit length to avoid overflow. Adding or subtracting two 16-bit integers produces a 17-bit result; multiplying two 16-bit integers produces a 32-bit result. In fixed-point arithmetic we typically multiply and shift right; for example, if we wanted to multiply some...
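
The excerpt trails off, but the multiply-and-shift pattern it describes is easy to make concrete. A minimal Q15 sketch (the helper name is mine; the article's own examples may differ):

import numpy as np

def q15_mul(a, b):
    """Multiply two Q15 values held in int16, then shift the
    32-bit product right by 15 to rescale back to Q15."""
    product = np.int32(a) * np.int32(b)   # 16 x 16 multiply -> 32-bit result
    return np.int16(product >> 15)        # right shift drops the low 15 bits

# 0.5 * 0.5 = 0.25 in Q15: 16384 * 16384 >> 15 == 8192
print(q15_mul(np.int16(16384), np.int16(16384)))   # 8192

The shift truncates rather than rounds, and the article's point is that in many cases this is perfectly acceptable.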


Padé Delay is Okay Today

Jason Sachs · March 1, 2016 · 4 comments

This article is going to be somewhat different, in that I’m not really writing it for the typical embedded systems engineer. Rather, it’s kind of a specialized topic, so don’t be surprised if you get bored and move on to something else. That’s fine by me.

Anyway, let’s just jump ahead to the punchline. Here’s a numerical simulation of a step response to a \( p=126, q=130 \) Padé approximation of a time delay:
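
Here, for the curious, is a low-order sketch of the same experiment. It is my stand-in using the python-control package; the article's \( p=126, q=130 \) case requires far more numerical care than plain double-precision polynomial arithmetic provides:

import numpy as np
import matplotlib.pyplot as plt
import control  # pip install control

T = 1.0                            # delay to approximate, in seconds
num, den = control.pade(T, 8)      # 8th-order Pade approximation of exp(-sT)
t, y = control.step_response(control.tf(num, den), np.linspace(0, 3, 1000))
plt.plot(t, y)                     # rings near zero, settles at 1 around t = T
plt.xlabel('t (s)')
plt.ylabel('step response')
plt.show()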

Impressed? Maybe you should be. This...


Ten Little Algorithms, Part 2: The Single-Pole Low-Pass Filter

Jason Sachs · April 27, 2015 · 12 comments

I’m writing this article in a room with a bunch of other people talking, and while sometimes I wish they would just SHUT UP, it would be...
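
The excerpt breaks off before the algorithm itself, but the filter named in the title is the classic one-line exponential smoother. A minimal sketch (the function name and test signal are mine):

import random

def lowpass(y, x, alpha):
    """One update of a single-pole low-pass filter.
    alpha is between 0 and 1; smaller means heavier smoothing."""
    return y + alpha * (x - y)

# Smooth a noisy measurement of a constant value
y = 0.0
for _ in range(1000):
    x = 1.0 + random.gauss(0, 0.1)   # noisy reading of 1.0
    y = lowpass(y, x, 0.01)
print(y)                             # close to 1.0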


Understanding and Preventing Overflow (I Had Too Much to Add Last Night)

Jason Sachs · December 4, 2013

Happy Thanksgiving! Maybe the memory of eating too much turkey is fresh in your mind. If so, this would be a good time to talk about overflow.

In the world of floating-point arithmetic, overflow is possible but not particularly common. You can get it when numbers become too large; IEEE double-precision floating-point numbers support a range of just under \( 2^{1024} \), and if you go beyond that you have problems:

for k in [10, 100, 1000, 1020, 1023, 1023.9, 1023.9999, 1024]:
    try:
        ...
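
The snippet is truncated here; a runnable completion might look like the following (the print formatting is my guess, not necessarily the article's):

for k in [10, 100, 1000, 1020, 1023, 1023.9, 1023.9999, 1024]:
    try:
        print('2**%s = %s' % (k, 2.0 ** k))
    except OverflowError as e:
        print('2**%s -> OverflowError: %s' % (k, e))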

Signal Processing Contest in Python (PREVIEW): The Worst Encoder in the World

Jason Sachs · September 7, 2013 · 6 comments

When I posted an article on estimating velocity from a position encoder, I got a number of responses. A few of them were of the form "Well, it's an interesting article, but at slow speeds why can't you just take the time between the encoder edges, and then...." My point was that there are lots of people out there who take this approach and don't take into account that the time between encoder edges varies due to manufacturing errors in the encoder. For some reason this is a hard concept...
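
To see why this matters, here is a minimal simulation of my own (all the numbers are made up): even at perfectly constant speed, small errors in where the encoder lines sit make the naive one-over-delta-t estimate noisy.

import numpy as np

np.random.seed(0)
N = 64                    # encoder lines per revolution
speed = 2.0               # true speed, revolutions per second (constant)
# Nominal edge positions, each displaced by a small manufacturing error
angles = np.sort(np.arange(N) / N + np.random.normal(0, 0.0005, N))
edge_times = angles / speed            # constant speed: time = angle / speed
# Naive estimate: one line spacing divided by the time between edges
naive = (1.0 / N) / np.diff(edge_times)
print(naive.mean(), naive.std())       # mean near 2.0, but with wide spread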


Adventures in Signal Processing with Python

Jason Sachs · June 23, 2013 · 11 comments

Author’s note: This article was originally called Adventures in Signal Processing with Python (MATLAB? We don’t need no stinkin' MATLAB!) — the allusion to The Treasure of the Sierra Madre has been removed, in deference to being a good neighbor to The MathWorks. While I make no secret of my dislike of many aspects of MATLAB — which I mention later in this article — I do hope they can improve their software and reduce the price. Please note this...


Oscilloscope Dreams

Jason Sachs · January 15, 2012 · 5 comments

My coworkers and I recently needed a new oscilloscope. I thought I would share some of the features I look for when purchasing one.

When I was in college in the early 1990s, our oscilloscopes looked like this:

Now the cathode ray tubes have almost all been replaced by digital storage scopes with color LCD screens, and they look like these:

Oscilloscopes are basically just fancy expensive boxes for graphing voltage vs. time. They span a wide range of features and prices:...

