This is in response to Vladimir's post 35 from 'DSP engineer position opening in Vienna / Austria'. I posted it twice in the normal manner and it never appeared on comp.dsp.

Vladimir,

There is no 5-page guide that covers everything you need to know to do integer implementations. So even if you can read and understand a 5-page guide in a day, you still can't learn to do integer implementations in a day. There are too many problems that will be unique to applications, processors, tools, and resource limitations to cover them ALL in a guide. The solutions are usually not solvable with recipes; they require knowledge, experience, judgement, and an understanding of what constitutes acceptable performance (which may be subjective).

From your comments, in the 3 courses you taught in integer arithmetic you "pointed the general direction and answered some questions, that's about it." You did not teach them how to do integer implementations. Integer arithmetic is just the starting point. At best, you taught them enough to be dangerous, if they try to apply it to a project of any complexity and don't continue on and learn the other 95% of what they need to know.

It takes a certain kind of personality to have much chance of doing this right most of the time. A lot of engineers don't have the drive to learn the details, learn the possible mistakes, try to avoid the mistakes, find the remaining mistakes, and fix the mistakes. It can get pretty tedious. The same can be said of other engineering specialties, so this isn't unique to this topic.

I have worked with engineers with Masters and PhD degrees who understood the integer math, but did not understand its impact on their integer implementations. A lot of engineers have no idea how to test their integer implementations. If it is so simple, give me definitive answers to the "simple" questions posed by me in my discussions with R B-J. You can't, because while some of the questions have definite answers, others do not.
You have to decide what solution will suffice for your application within the constraints of the processor architecture (word size, accumulator size, instruction set, ...), available memory, allowable processing load, and what constitutes acceptable performance. As these constraints get tighter, the problem gets harder. Often you have no libraries to fall back on, so mathematical functions have to be implemented accordingly using integer operations. Often not trivial.

To answer your question "BTW, do you really think anybody would like to learn how to do math in the integer?", the answer is 'yes'. Several types of situations immediately come to mind. I am sure others here can add additional ones.

1) FPGA designers who for one reason or another (dollar cost, time cost, gate count cost) can't use floating-point.
2) DSP designers who must build low-power, small-footprint miniature systems using processors with low clock rates and small word sizes, probably coded in assembly language.
3) DSP designers who have to fit their code into existing integer DSPs, with or without the luxury of programming in a high-level language.
4) People doing DSP on controllers.
5) Consumer items where chip cost demands an integer solution.

Have you ever actually done a non-trivial integer implementation of a complicated process that did not lend itself to an integer implementation? Maybe some complex audio processing, or a DSP-based radio? I would guess not, from what you have said here. Please correct me if I am wrong; I'd like to hear about the challenges you faced. As for your successful new grad, either the project was non-challenging numerically or you got a great hire (give him/her a raise).

Regarding the managers deciding when something is complete: when I have done DSP, the managers usually did not understand DSP, much less integer implementations. They were paying me for my expertise. The job was not complete until I was happy with the job I had done.
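Dirk's point about implementing mathematical functions "using integer operations" with no library to fall back on can be made concrete. Here is a minimal sketch (my own illustration, not code from the thread) of an integer square root built only from shifts, adds, and compares — the kind of routine one hand-codes when there is no FPU and no math library:

```c
#include <stdint.h>

/* Bit-by-bit integer square root: returns floor(sqrt(v)).
   Uses only shifts, adds, and compares -- no multiply, no FPU. */
uint16_t isqrt32(uint32_t v)
{
    uint32_t root = 0;
    uint32_t bit = 1UL << 30;          /* highest power of four <= 2^31 */

    while (bit > v)                    /* shrink to the operand's range */
        bit >>= 2;

    while (bit != 0) {
        if (v >= root + bit) {
            v -= root + bit;
            root = (root >> 1) + bit;  /* append a 1 bit to the root */
        } else {
            root >>= 1;                /* append a 0 bit */
        }
        bit >>= 2;
    }
    return (uint16_t)root;
}
```

Routines like this trade cycles for hardware: each result bit costs one compare and at most one subtract, which is exactly the kind of budgeting Dirk describes.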
I make it a point to give them what they need, even if it isn't what they want (I have frustrated my share of managers and program managers, but most have ended up happy with the results). Sometimes that has meant they didn't get what they wanted when they wanted it (if it was feasible to not give it to them), or they got a version to work with while I continued to work on it until I was satisfied. As for the "big bucks", when I was doing consulting/contracting I often earned more than the managers did. I take responsibility for what I do.

So much discussion for such a "simple" topic.

Dirk
Re: Vladimir Post 35 about Integer Implementations: WAS: DSP engineer position opening in Vienna / Austria
Started by ●May 11, 2007
Reply by ●May 11, 2007
dbell wrote:
> Vladimir,
>
> There is no 5-page guide that covers everything you need to know to do
> integer implementations.

Dirk,

There is no limit to the sophistication. However, the necessary minimum of integer arithmetic can be explained in five pages. And a sensible person should learn that in one day.

> So if you can read and understand a 5-page guide in a day you still
> can't learn to do integer implementations in a day. There are too many
> problems that will be unique to applications, processors, tools, and
> resource limitations to cover them ALL in a guide.

An engineer does not have to know ALL at once. Everything consists of small building blocks like FIR, IIR, PLL, PID, etc. Once you get the main idea of what the blocks are, it is very easy to implement them using any tools and hardware. To me, the primary thing is understanding how it should work; the implementation is not a problem.

> The solutions are usually not solvable with recipes, they require
> knowledge, experience, judgement, understanding of what constitutes
> acceptable performance (may be subjective).

Yes. As Stroustrup said, there is no substitute for common sense, intelligence, good taste and experience.

> From your comments that you taught 3 courses in integer arithmetic
> where you "pointed the general direction and answered some questions,
> that's about it." You did not teach them how to do integer
> implementations. Integer arithmetic is just the starting point. At
> best, you taught them enough to be dangerous, if they try to apply it
> to a project of any complexity and don't continue on and learn the
> other 95% of what they need to know.

Isn't it your job as a leader to give them a piece of work which is adequate to their level?
After 5 pages / 1 day of learning, they are capable of implementing, for example, a biquad section with a given precision, or a rotation of a 3D object on the screen.

> It takes a certain kind of personality to have much chance of doing
> this right most of the time. A lot of engineers don't have the drive
> to learn the details, learn the possible mistakes, try to avoid the
> mistakes, find remaining mistakes, fix the mistakes. It can get pretty
> tedious. The same can be said of other engineering specialties, so
> this isn't unique to this topic.

On the other side, there is a very dangerous and addictive habit of getting stuck on minor technical details instead of seeing the picture of the project as a whole. Many good professionals are prone to this pernicious habit.

> I have worked with engineers with Masters and PhD. degrees that
> understood the integer math, but did not understand the impact on
> their integer implementations. A lot of engineers have no idea how to
> test their integer implementations.

Unfortunately, an advanced degree can only be a measure of assiduousness. There is no direct relation between the performance of a person and the educational degree.

> If it is so simple, give me definitive answers to the "simple"
> questions posed by me in my discussions with R B-J. You can't, because
> while some of the questions have definite answers, others do not. You
> have to decide what solution will suffice for your application within
> the constraints of the processor architecture (word size, accumulator
> size, instruction set, ...), memory available, allowable processing
> load, what constitutes acceptable performance. As these constraints
> get tighter the problem gets harder. Often you have no libraries to
> fall back on, so mathematical functions have to be implemented
> accordingly using integer operations. Often not trivial.

Dirk, do you remember how to do calculations using the slide rule?
I am pretty sure there used to be a lot of fine art about it. The point I am trying to make is that this sort of knowledge is unessential. Quite soon they will release a PIC-16 processor with native support for double-precision floating point, and this will render all of the tricks unnecessary. Thus, I am trying to think of what is required to do the job, not about how it can be done.

> Have you ever actually done a non-trivial integer implementation of a
> complicated processing that did not lend itself to an integer
> implementation?

Yes. I did several quite complicated projects in integer and in assembler. It worked fine. However, now I regret that, and feel a pity thinking of the wasted time and effort.

> Maybe some complex audio processing, or a DSP based radio? I would
> guess not, from what you have said here. Please correct me if I am
> wrong, I'd like to hear about the challenges you faced.

You are awfully wrong :-) I do know integer calculations very well. To me, the most difficult problem with integers is the uncontrollable bit growth in complicated calculations with vectors or matrices, especially if it includes raising to the N-th power. In this case, it is not enough to make it work for the worst case; there have to be separate branches for different cases or some other tedious stuff.

VLV
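For readers wondering what Vladimir's "biquad section with a given precision" exercise might look like, here is a hedged C sketch of a direct form I biquad with Q15 coefficients and a 32-bit accumulator. The structure, names, and scalings are illustrative — my own rendering, not code from either poster:

```c
#include <stdint.h>

/* Saturate a 32-bit value to the 16-bit two's complement range. */
static int16_t sat16(int32_t x)
{
    if (x >  32767) return  32767;
    if (x < -32768) return -32768;
    return (int16_t)x;
}

/* Direct form I biquad: Q15 coefficients, Q15 signals, 32-bit accumulator.
   The single rounding and saturation at the output is the "given
   precision" part of the exercise. */
typedef struct {
    int16_t b0, b1, b2, a1, a2;   /* Q15 coefficients */
    int16_t x1, x2, y1, y2;       /* delay line */
} biquad_q15;

int16_t biquad_step(biquad_q15 *f, int16_t x)
{
    int32_t acc;

    /* Each product of two Q15 values is Q30; sum them in 32 bits. */
    acc  = (int32_t)f->b0 * x;
    acc += (int32_t)f->b1 * f->x1;
    acc += (int32_t)f->b2 * f->x2;
    acc -= (int32_t)f->a1 * f->y1;
    acc -= (int32_t)f->a2 * f->y2;

    /* Round, shift Q30 back to Q15, then saturate once at the output. */
    int16_t y = sat16((acc + (1L << 14)) >> 15);

    f->x2 = f->x1;  f->x1 = x;
    f->y2 = f->y1;  f->y1 = y;
    return y;
}
```

Note that even this "one day" exercise already runs into the thread's themes: the rounding constant, the placement of the saturation, and the fact that a gain of exactly 1.0 is not representable in Q15.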
Reply by ●May 12, 2007
On May 11, 12:55 pm, dbell <bellda2...@cox.net> wrote:
...
> I have worked with engineers with Masters and PhD. degrees that
> understood the integer math, but did not understand the impact on
> their integer implementations. A lot of engineers have no idea how to
> test their integer implementations. If it is so simple, give me
> definitive answers to the "simple" questions posed by me in my
> discussions with R B-J. You can't, because while some of the
> questions have definite answers, others do not.

one thing that i meant to mention (and forgot) was that even though you are right here about those questions, some are pretty tough without definite answers, you *can* usually write code for the fixed-point or integer machine to avoid those icky consequences of overflow, and put in the saturation only where it is needed. having enough bits in the word and enough guard bits in each accumulator register, and doing prudent things when you either quantize on the right or overflow/saturate on the left. you can write code and scale things so that you will never have an absolute value that comes out as 0x80000000. bits are pretty cheap these days, no?

> You have to decide what solution will suffice for your application
> within the constraints of the processor architecture (word size,
> accumulator size, instruction set, ...), memory available, allowable
> processing load, what constitutes acceptable performance. As these
> constraints get tighter the problem gets harder. Often you have no
> libraries to fall back on, so mathematical functions have to be
> implemented accordingly using integer operations. Often not trivial.
>
> To answer your question "BTW, do you really think anybody would like
> to learn how to do math in the integer?", the answer is 'yes'. Several
> types of situations immediately come to mind. I am sure others here
> can add additional ones.
> 1) FPGA designers who for one reason or another (dollar cost, time
> cost, gate count cost) can't use floating-point.
> 2) DSP designers who must build low power, small footprint miniature
> systems using processors with low clock rates, small word sizes,
> probably coded in assembly language.
> 3) DSP designers who have to fit their code into existing integer
> DSPs, with or without the luxury of programming in a high-level
> language.
> 4) People doing DSP on controllers.
> 5) Consumer items where chip cost demands an integer solution.
>
> Have you ever actually done a non-trivial integer implementation of a
> complicated processing that did not lend itself to an integer
> implementation? Maybe some complex audio processing,

the nastiest thing i can think of where i concede the field to the floating-point partisans is frequency-domain crap like sinusoidal modeling or phase vocoder or similar. doing FFTs of any impressive size in fixed-point is nasty and really requires at least block-floating-point. maybe LPC and Levinson-Durbin. other than that, i am still a sorta fixed-point partisan, especially in hardware implementations such as ASICs or FPGAs, although i have seen a clever little pseudo-floating-point format in such.

probably my most non-trivial implementation of some alg on an integer machine was my very first. it was predicting the equilibrium state or steady-state of a first-order linear process. the math required multiplying, squaring, integration, and division (and binary to decimal conversion and back). and my processor was a Mot MC6800 (two zeros) so there wasn't even a multiply instruction. fortunately the bandwidth was extremely small and a 10 Hz sampling rate was sufficient.

i s'pose there are people making controllers with 8-bit PICs or something where they had to string together several bytes to make a decent word to do math with.

r b-j
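r b-j's advice — saturate only where needed, and scale so that no absolute value ever comes out as 0x80000000 — can be sketched as a saturating add that clamps the negative side to -0x7FFFFFFF. This is my own portable illustration of what a DSP's saturation mode does in hardware, not code from the thread:

```c
#include <stdint.h>

/* Saturating 32-bit add: clamps instead of wrapping on overflow.
   Clamping the low end at -INT32_MAX (not INT32_MIN) guarantees that
   a later absolute value can never produce 0x80000000. */
int32_t sat_add32(int32_t a, int32_t b)
{
    /* Do the sum in 64 bits so overflow of the 32-bit sum is visible. */
    int64_t s = (int64_t)a + b;
    if (s >  INT32_MAX) return  INT32_MAX;
    if (s < -INT32_MAX) return -INT32_MAX;   /* symmetric clamp */
    return (int32_t)s;
}
```

On a DSP with a saturation mode this is one cycle; written out in portable C it is several operations, which previews Dirk's cost argument in the next post.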
Reply by ●May 12, 2007
On May 12, 1:49 am, robert bristow-johnson <r...@audioimagination.com> wrote:
> On May 11, 12:55 pm, dbell <bellda2...@cox.net> wrote:
> ...
<snipped>
> one thing that i meant to mention (and forgot) was that even though
> you are right here about those questions, some are pretty tough
> without definite answers, you *can* usually write code for the fixed-
> point or integer machine to avoid those icky consequences of overflow,
> and put in the saturation only where it is needed. having enough bits
> in the word and enough guard bits in each accumulator register
> and doing prudent things when you either quantize on the right or
> overflow/saturate on the left. you can write code and scale things so
> that you will never have an absolute value that comes out as
> 0x80000000. bits are pretty cheap these days, no?
<snipped>
> r b-j

R B-J,

Bits are cheap only if you get to use them. So are instructions. Early this century (2001, I think) I worked with a couple of people on LPC for a radio where the audio processing was done on a 16-bit Analog Devices 2100-family DSP. I don't remember the clock speed, but they typically ran around 30 MIPS. The DSP was picked for small footprint and low power, long before I showed up. The hardware was already built. The processor was doing other things besides the LPC. Neither bits nor instructions were cheap. At low clock rates the cycles add up fast.

Think about a generic fixed-point one-cycle absolute-value assembly language instruction:

abs X0;

To make it "clean" for 16-bit two's complement math you have to do something like:

take the absolute value
test if the absolute value result is negative (in error)
if the absolute value result was negative, set the result to the maximum positive number

Suddenly one instruction became several. If you do this enough times in a second, the cost mounts. Might not be feasible.
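Dirk's three-step sequence can be written out in portable C. One detail: on the DSP the bad case shows up as a negative "absolute value" (abs of -32768 wraps to -32768), whereas in C we can compute in a wider register and range-check instead. This is an illustrative sketch, not his actual assembly:

```c
#include <stdint.h>

/* "Clean" 16-bit absolute value. The lone problem input is -32768,
   whose magnitude does not fit in int16_t; it must be clamped to the
   maximum positive value -- the extra steps Dirk describes. */
int16_t abs16_sat(int16_t x)
{
    int r = (x < 0) ? -x : x;   /* the one-cycle "abs", in a wide register */
    if (r > 32767)              /* only x == -32768 trips this */
        r = 32767;              /* clamp to maximum positive */
    return (int16_t)r;
}
```

Three operations where the raw instruction took one — multiplied across every sample of every channel, which is exactly how the cycle budget evaporates.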
Same thing if you are doing a filter and the accumulator is not large enough to hold all possible results and you want to check for overflow. The processor may have no way to detect final overflow. The overflow of intermediate results (non-saturating 2's complement) is okay if the final result fits in the representable numeric range, but there is no way to look at the output to determine whether a final overflow occurred. If you saturated when an intermediate result overflowed, you could be destroying an otherwise good final result. Of course, if it barely overflowed, the result may be hugely in the opposite polarity of what you wanted. Scaling down either the data or the coefficients may not give adequate results.

One possibility involves checks on intermediate results (only checking after enough taps have executed that overflow is possible), which probably takes more cycles per sample than the actual filtering. If you do this enough times in a second with a lot of taps, the cost mounts. There are other possibilities, but they may still be costly.

I worked on an existing high-performance surveillance receiver in the early 90's that was so strapped for cycles (enhancements were being added faster than the clock rate on the processors increased, and no significant hardware changes were permitted) that if I added two more taps (really, two) to one of the many filters, the output started messing up (not high performance anymore) because the processor could no longer keep up. In this case there was little room for extra instructions to check for problem situations.

These are examples of real-world issues. The solutions are not always "simple". What you can do conceptually to solve a problem and what you can do practically to solve it may have little in common.

Dirk
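Dirk's point that non-saturating intermediate overflow is harmless when the final result fits can be demonstrated directly: modulo-2^16 arithmetic is associative, so wrapped partial sums still land on the right answer. A small self-contained demo (mine, not Dirk's):

```c
#include <stdint.h>

/* Sum taps in plain wrapping 16-bit arithmetic. Intermediate sums may
   wrap past +32767, but as long as the true total fits in 16 bits the
   final answer is exact. Saturating mid-sum would have frozen the
   accumulator at 32767 and corrupted an otherwise good result. */
int16_t wrap_sum(const int16_t *v, int n)
{
    uint16_t acc = 0;                       /* modulo-2^16 accumulator */
    for (int i = 0; i < n; i++)
        acc = (uint16_t)(acc + (uint16_t)v[i]);
    return (int16_t)acc;                    /* reinterpret as signed */
}
```

With inputs {30000, 30000, -30000, -29000}, the first partial sum (60000) overflows the signed range, yet the final result is the exact total, 1000 — while a saturating accumulator would have returned the wrong value. Detecting whether the *final* sum overflowed, as Dirk says, is the part the output alone cannot tell you.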
Reply by ●May 12, 2007
robert bristow-johnson wrote:
> the nastiest thing i can think of where i concede the field to the
> floating-point partisans is frequency-domain crap like sinusoidal
> modeling or phase vocoder or similar. doing FFTs of any impressive
> size in fixed-point is nasty and really requires at least block-
> floating-point.

Even a simple thing like ax^2 + bx + c is already nasty in fixed point, and requires using tricks to preserve the precision. When it comes to Ax^2 + Bx + C, this is a real sorrow.

> maybe LPC and Levinson-Durbin.

From my experience, those are not too bad because there is generally no need for high-precision results. For speech compression, around 10-bit accuracy is good enough, thus you can afford losing bits. Of course, care should be taken about possible ill-behaviour.

> other than that, i am still a sorta fixed-point partisan. especially
> in hardware implementations such as ASICs or FPGAs although i have
> seen a clever little pseudo-floating-point format in such.

I am not a hacker any more, and I prefer to select the means which are most adequate for a particular task.

> probably my most non-trivial implementation of some alg on an integer
> machine was my very first.

My graduate work was in the processing of a radar signal. It was done in integer C + i8086 assembler. That was mainly because I was young and stupid. After that I did a lot of silly projects like that.

> it was predicting the equilibrium state or steady-state of a
> first-order linear process. the math required multiplying, squaring,
> integration, and division (and binary to decimal conversion and
> back). and my processor was a Mot MC6800 (two zeros) so there wasn't
> even a multiply instruction. fortunately the bandwidth was extremely
> small and 10 Hz sampling rate was sufficient.

So? Is everybody supposed to behold and admire? Lots of time and effort was spent.
What did you gain for yourself?

> i s'pose there are people making controllers with 8-bit PICs or
> something where they had to string together several bytes to make a
> decent word to do math with.

With controllers, the accuracy is typically limited by the sensors and actuators. There is rarely a need to do better than 1%. Also, the algorithms are as simple as PID, so basic 16-bit math is enough in most cases.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
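Vladimir's "algorithms as simple as PID, so basic 16-bit math is enough" can be sketched concretely. The gain scaling (Q8), the integrator clamp, and all names below are my own illustrative choices, not anything from the thread:

```c
#include <stdint.h>

/* Minimal integer PID sketch: 16-bit signals, Q8 gains (gain = value/256),
   a wide integrator with an anti-windup clamp, saturated 16-bit output. */
typedef struct {
    int16_t kp, ki, kd;   /* Q8 gains */
    int32_t integ;        /* integrator kept in a wide register */
    int16_t prev_err;
} pid16;

int16_t pid_step(pid16 *c, int16_t err)
{
    int32_t u;

    c->integ += err;
    if (c->integ >  (1L << 20)) c->integ =  (1L << 20);  /* limit windup */
    if (c->integ < -(1L << 20)) c->integ = -(1L << 20);

    u  = (int32_t)c->kp * err;
    u += (int32_t)c->ki * c->integ;
    u += (int32_t)c->kd * (err - c->prev_err);
    c->prev_err = err;

    u >>= 8;                          /* remove the Q8 gain scaling */
    if (u >  32767) u =  32767;       /* saturate to the 16-bit output */
    if (u < -32768) u = -32768;
    return (int16_t)u;
}
```

With 1%-class sensors, the quantization introduced by the Q8 gains and the 16-bit output is well below the noise floor — which is Vladimir's point.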
Reply by ●May 14, 2007
On May 12, 1:00 pm, Vladimir Vassilevsky <antispam_bo...@hotmail.com> wrote:
> robert bristow-johnson wrote:
<snipped>
>
> Even the simple thing like ax^2 + bx + c is already nasty in the fixed
> point, and requires using tricks to preserve the precision. When it
> comes to Ax^2 + Bx + C, this is a real sorrow.

presently, i do higher-order polynomials (usually they are either even or odd symmetry, so the number of terms is about half the order) all the time with fixed-point arithmetic and a limited domain (-1 <= x < +1 ... the application is waveshaping). it's not too sad.

> > it was predicting the equilibrium state or steady-state of a
> > first-order linear process. the math required multiplying, squaring,
> > integration, and division (and binary to decimal conversion and
> > back). and my processor was a Mot MC6800 (two zeros) so there wasn't
> > even a multiply instruction. fortunately the bandwidth was extremely
> > small and 10 Hz sampling rate was sufficient.
>
> So? Is everybody supposed to behold and admire?

it's an example of what can be done with fixed-point when that is all that's available to you.

> Lots of time and effort was spent. What did you gain for yourself?

an M.S. degree. as well as hard-core experience in how to do sophisticated mathematical signal processing with better than 0.1% precision when all one has is a machine that can move, add, and subtract 8-bit numbers.
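r b-j's limited-domain polynomial evaluation is typically done with Horner's rule in Q15. A hedged sketch (my own, not his waveshaping code), which assumes the coefficients have been pre-scaled so all Horner intermediates stay in the Q15 range:

```c
#include <stdint.h>

/* Q15 multiply with rounding: the Q30 product shifted back to Q15. */
static int16_t q15_mul(int16_t a, int16_t b)
{
    return (int16_t)(((int32_t)a * b + (1L << 14)) >> 15);
}

/* Evaluate c[0] + c[1]*x + ... + c[n-1]*x^(n-1) by Horner's rule,
   everything in Q15. On a limited domain |x| < 1, each multiply by x
   shrinks the running value, which is what keeps bit growth at bay --
   provided the coefficients are scaled so intermediates fit. */
int16_t poly_q15(const int16_t *c, int n, int16_t x)
{
    int32_t acc = c[n - 1];
    for (int i = n - 2; i >= 0; i--)
        acc = c[i] + q15_mul((int16_t)acc, x);
    return (int16_t)acc;
}
```

For example, with coefficients approximating x^2 and x = 0.5 in Q15 (16384), the result comes out as 0.25 (8192), with only the one rounding per multiply that Vladimir's "tricks to preserve the precision" complaint is about.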
and more hard-core experience on how to put something like this in a box with a power supply, an analog instrumentation amplifier, using a 12-bit D/A as a successive-approximation A/D (these suckers were expensive in 1979), how to convert binary to decimal and decimal to binary cheaply and elegantly, and then how to do UI kinda stuff like dispatching commands from button pushes and converting to 7-segment display. for a 24-year-old snot-nosed grad student, that was pretty valuable. waiting around for a DSP or some floating-point chip or even a chip with a built-in multiplier was not in the cards. it's like what Jerry says: "Engineering is making the things you want with the things you have."

the geriatrics professor i did this for gained a machine that he could use in his lab measuring the level of colloid in blood serum samples (by use of osmosis) without having to wait the 5 minutes (the time-constant of this 1st-order process was about a minute) for each sample. he was then able to process hundreds (maybe a 1000 or more, but i dunno) of samples instead of dozens and get better statistical confidence for whatever medical research he was doing. i dunno what he was trying to correlate, maybe increased colloid in blood means old patient is more likely to drop dead soon. beats me.

r b-j
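The MC6800 r b-j mentions has no multiply instruction; the standard workaround is a shift-and-add loop. Here is a C rendering (illustrative, not his actual 6800 assembly) of the 8x8 multiply one codes by hand on such a machine:

```c
#include <stdint.h>

/* 8x8 -> 16-bit unsigned multiply using only shifts, adds, and bit
   tests -- the loop hand-coded on CPUs with no multiply instruction. */
uint16_t mul8x8(uint8_t a, uint8_t b)
{
    uint16_t product = 0;
    uint16_t addend  = a;        /* shifted copy of the multiplicand */

    while (b != 0) {
        if (b & 1)               /* low multiplier bit set: accumulate */
            product += addend;
        addend <<= 1;            /* next weight of the multiplicand */
        b >>= 1;                 /* consume one multiplier bit */
    }
    return product;
}
```

Eight iterations of shift/test/add per multiply; string several of these together for multi-byte words and the "10 Hz was sufficient" remark makes sense.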
Reply by ●May 14, 2007
robert bristow-johnson <rbj@audioimagination.com> wrote in
news:1179160458.634125.93620@h2g2000hsg.googlegroups.com:

> the geriatrics professor i did this for gained a machine that he could
> use in his lab measuring the level of colloid in blood serum samples
> (by use of osmosis) without having to wait the 5 minutes (the time-
> constant of this 1st-order process was about a minute) for each
> sample.

Optimal sensitivity for the time constant (max of dy/dtau; output y, time constant tau) of a 1st-order process is at one time constant, and is very small when you are 5 time constants out.

I think the whole discussion line can be summed up by:

1) There is no substitute for practical experience.
2) There are times when understanding the guts of your algorithm at a fundamental level can really be a good idea, saving time and money.

The best engineers might very well cover both bases.

--
Scott
Reverse name to reply
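Scott's sensitivity claim checks out for a first-order step response; a short derivation (my addition, not part of his post):

```latex
% Step response of a first-order process with settling value Y:
y(t) = Y\left(1 - e^{-t/\tau}\right)
\quad\Rightarrow\quad
\frac{\partial y}{\partial \tau} = -\,\frac{Y\,t}{\tau^{2}}\,e^{-t/\tau}.

% Maximizing the magnitude over t:
\frac{d}{dt}\!\left(t\,e^{-t/\tau}\right)
  = e^{-t/\tau}\left(1 - \frac{t}{\tau}\right) = 0
\quad\Rightarrow\quad t = \tau.

% At t = 5\tau the magnitude is 5e^{-5}/e^{-1} = 5e^{-4} \approx 9\%
% of the peak -- "very small when you are 5 time constants out."
```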
Reply by ●May 14, 2007
On May 14, 12:49 pm, Scott Seidman <namdiestt...@mindspring.com> wrote:
> robert bristow-johnson <r...@audioimagination.com> wrote in
> news:1179160458.634125.93620@h2g2000hsg.googlegroups.com:
>
> > the geriatrics professor i did this for gained a machine that he could
> > use in his lab measuring the level of colloid in blood serum samples
> > (by use of osmosis) without having to wait the 5 minutes (the time-
> > constant of this 1st-order process was about a minute) for each
> > sample.
>
> Optimal sensitivity for the time constant (max of dy/dtau; output y, time
> constant tau) of a 1st order process is at one time constant, and is very
> small when you are 5 time constants out.

that (the time constant) kinda came out in the wash. the equilibrium pressure (the settling value of the 1st-order process) was the parameter that was proportional to (or at least some increasing function of) the biochemical parameter of interest to this person. the formula implemented is depicted at:

http://groups.google.com/group/comp.dsp/msg/21a4d9092a115e56?dmode=source

nothing special happened at 1 time constant. what happened was that mathematically we were trading a predicted reading for increased error. with a model that there is some (gaussian) error in the input signal, the output (predicted) signal had an error that was larger than the input but was decreasing in time until about 5 time constants (where you could just read the input directly for the Peq). so if you wanted a predicted value, you had to pay for it with a noisier result. that seemed intuitively satisfying to me since it was a sorta "conservation of information" theorem.

> I think the whole discussion line can be summed up by
>
> 1) There is no substitute for practical experience

well, there *are*, but these substitutes are not very practical.

> 2) There are times when understanding the guts of your algorithm at a
> fundamental level can really be a good idea, saving time and money.

or just making it possible.
i can't make anything work without really understanding the guts of it. i dunno how else to debug the alg. without understanding the guts of it, how do i know, if i single-step through it, whether some intermediate number or parameter is "correct"?

> The best engineers might very well cover both bases.

yup.

r b-j
Reply by ●May 15, 2007
robert bristow-johnson wrote: ...> probably my most non-trivial implementation of some alg on an integer > machine was my very first. it was predicting the equilibrium state or > steady-state of a first-order linear process. the math required > multiplying, squaring, integration, and division (and binary to > decimal conversion and back). and my processor was a Mot MC6800 (two > zeros) so there wasn't even a multiply instruction. fortunately the > bandwidth was extremely small and 10 Hz sampling rate was sufficient. > i s'pose there are people making controllers with 8-bit PICs or > something where they had to string together several bytes to make a > decent word to do math with.That's a tougher job than one I did with an 1802. Mine integrated the outputs of 1024 spectrometer channels using triple precision (24 bit) accumulators. The program was written using a two-pass assembler that used paper tape on an ASR33 TTY. One didn't want many do-overs for typos. Jerry -- Engineering is the art of making what you want from things you can get. ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply by ●May 15, 2007
On May 15, 1:03 pm, Jerry Avins <j...@ieee.org> wrote:
> robert bristow-johnson wrote:
> ...
> > probably my most non-trivial implementation of some alg on an integer
> > machine was my very first. it was predicting the equilibrium state or
> > steady-state of a first-order linear process. the math required
> > multiplying, squaring, integration, and division (and binary to
> > decimal conversion and back). and my processor was a Mot MC6800 (two
> > zeros) so there wasn't even a multiply instruction. fortunately the
> > bandwidth was extremely small and 10 Hz sampling rate was sufficient.
> > i s'pose there are people making controllers with 8-bit PICs or
> > something where they had to string together several bytes to make a
> > decent word to do math with.
>
> That's a tougher job than one I did with an 1802. Mine integrated the
> outputs of 1024 spectrometer channels using triple precision (24 bit)
> accumulators. The program was written using a two-pass assembler that
> used paper tape on an ASR33 TTY. One didn't want many do-overs for typos.
>
> Jerry

IIRC, a company called Boston Systems made an 1802 cross-assembler that ran on the VAX. We used the chip in an ion mobility detector to control a grid allowing pulsed air samples into a drift tube, and then integrated the output of a 12-bit A/D converter to give an amplitude vs. time signature. We then profiled that against known nerve agent signatures to do sample identification. Not an easy chip to do DSP with (whatever that was).

Ken






