Hi all, this is a bit off topic, but in my case it is the VisualDSP compiler from Analog that I refer to.

Let's say I have a preprocessor statement like:

#define FRAME_LEN       8192
#define FM_SCALE        (0x100000000 / FRAME_LEN)

This value will be put into a 32-bit unsigned int, no problem at all, and it does work.

However, the literal value 0x100000000 would not fit in 32-bit arithmetic.

My question: what precision can I assume the compiler uses when calculating literal values like this? Would 0x10000000000000000 still work?

Is there any ANSI or other rule?

Best regards,

Andre
OT: precision of compilers / preprocessors?
Started by ●January 27, 2011
Reply by ●January 27, 2011
On Jan 27, 11:08 am, Andre <lod...@pathme.de> wrote:
> this is a bit off topic, but in my case it is the visual DSP compiler
> from Analog that I refer to.
> [...]
> My question: What precision can I assume the compiler uses when
> calculating literal values like this? Would 0x10000000000000000 still work?

You should really ask this in the C language forum. From what I remember, the language spec only sets a minimum on the number of bits in the representation. The exact implementation can vary from compiler to compiler and machine to machine. You'll have to look in <limits.h> for the exact details.

This is why it is good practice to define your own types explicitly in 'C'. Then, when moving to a different compiler/machine, you can just redo the definitions in one place rather than all throughout your code.

Hope that helps.

Cheers,
Dave
Reply by ●January 27, 2011
On 01/27/2011 08:08 AM, Andre wrote:
> #define FRAME_LEN 8192
> #define FM_SCALE (0x100000000 / FRAME_LEN)
> [...]
> My question: What precision can I assume the compiler uses when
> calculating literal values like this? Would 0x10000000000000000 still work?
>
> Is there any ANSI or other rule?

1: I don't know offhand, and neither should you expect folks reading your code to.

2: The important stuff happens during the actual compile, not during preprocessing (or at least it should).

2a: Or at least I think it does -- this would be a good thing to check (using cpp).

3: You could get your hands on the ANSI standard and check.

4: If 0x100000000 works, then so should anything that fits into the largest processor integer type (presumably 64 bits, although whether that's 'long' or 'long long' is up to the compiler).

5a: I have had compilers interpret oversized constants as truncated, quietly, without muss or fuss (until later).

5b: I have had compilers issue warnings about oversized constants, then either truncate or promote them to a larger type.

5c: If I've had compilers quietly use the larger constant, I don't recall, although it certainly may have happened.

6: Points 1 and 5 show you why good coding style demands that you use type suffixes on your constants.

I suspect that your compiler supports "long long", and that it automatically (and quietly) promotes too-long integers. You should find out how your compiler does this (probably "LL" at the end of the number) and do it explicitly.

Like Dave said, running this by a C language newsgroup is probably a good idea.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software? "Applied Control Theory for Embedded Systems" was written for you. See details at http://www.wescottdesign.com/actfes/actfes.html
Reply by ●January 27, 2011
Andre wrote:
> this is a bit off topic, but in my case it is the visual DSP compiler
> from Analog that I refer to.

There are at least 4 different VDSP compilers from Analog.

> #define FRAME_LEN 8192
> #define FM_SCALE (0x100000000 / FRAME_LEN)
> [...]
> Is there any ANSI or other rule?

1. The integer type by default is <int>. Hence all literals are interpreted as <int> unless the type is stated explicitly. Whatever the definition of <int> is in your toolchain.

2. If the literal value does not fit into the type, the compiler should issue a warning.

3. Don't use macros. Use constants.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Reply by ●January 27, 2011
On 27/01/2011 16:08, Andre wrote:
> #define FRAME_LEN 8192
> #define FM_SCALE (0x100000000 / FRAME_LEN)
> [...]
> My question: What precision can I assume the compiler uses when
> calculating literal values like this? Would 0x10000000000000000 still work?

Easy enough to try it and find out!

> Is there any ANSI or other rule?

Integer literals are converted to the largest available integral type - so if an integer literal is too large for an int, it will be compiled as a long if that is larger - which of course in the above case we hope is 8 bytes. If it is not, the results are "undefined". That does not mean it won't work - it just means that the behaviour in such cases is not mandated by the standard (the same goes for ANSI C and C++). So RTFM.

Arithmetic that mixes types will employ automatic promotion where necessary (e.g. char to int, float to double, int to long). So the value 0x100000000 will be converted into the largest available integral type, as will 8192 via promotion, and the final result will then be assigned to the integer variable.

Otherwise, the compiler should certainly issue a warning that the value is too large for the available type. It is the sort of warning that should not, as a rule, be ignored.

The general point is that such code is not reliably portable, as different compilers may handle that undefined behaviour in different ways. So unless you really need that large explicit literal, it is better to document the code as needed and define a symbol (whether as a const or a macro) that fits the standard type.
All modern compilers for mainstream platforms support "long long" these days (and gcc issues a warning anyway); what a compiler for a DSP chip will do is another matter.

The same issue applies to floating-point literals, of course; you can find plenty of code where a very, very long macro literal for pi or whatever is defined (it might have 30 significant figures); this (if it works at all!) relies on there being a "long double" of 16 bytes or more that can support that level of precision. Otherwise, it is simply truncated into whatever the system supports; a mere 17 decimal significant figures in an 8-byte double. Most of the time when I see such code, I attribute it to general navel-contemplation on the part of the programmer saying "look how precise I am!". Not.

Richard Dobson
Reply by ●January 28, 2011
On 27/01/2011 17:08, Andre wrote:
> My question: What precision can I assume the compiler uses when
> calculating literal values like this? Would 0x10000000000000000 still work?
> [...]
> Is there any ANSI or other rule?

I wouldn't bother asking in comp.lang.c - this forum is a better choice. The /theory/ behind the OP's question - what ANSI says - may be best answered in c.l.c. But he is really interested in the /practice/ - what this particular compiler does - and the best forum is one where people use that compiler. This is the sort of issue where compilers often stray from strict ANSI compliance, so the strict "laws" are not very relevant. Unfortunately, it seems no one has a definite answer.

There is a distinction between "host" type sizes and "target" type sizes. Constant calculations like this are handled by the compiler on the host, not the target, and the host may well work with larger integer sizes than the target. I believe the correct handling for the compiler is to treat the calculations as though they were to be executed on the target, promoting the values as appropriate according to C rules. But I would be careful about relying on the compiler following this exactly.

A brief test with gcc for an 8-bit target (16-bit ints) shows it has no problems working with 64-bit values for such calculations. However, although the target is 8-bit, gcc supports 64-bit ints on the target as well as on the host.
Personally, I would feel safer writing:

#define FRAME_LEN 8192
#define FM_SCALE (0x80000000ul / (FRAME_LEN / 2))

This will work on any compiler. Alternatively, use Vladimir's suggestion of using constants:

static const uint32_t FRAME_LEN = 8192;
static const uint32_t FM_SCALE = (0x100000000 / FRAME_LEN);

If the compiler doesn't like it, it should give you an error. Your code is neater and has stronger type-safety. However, this relies on the compiler doing its job well - some compilers can't optimise code with constants as well as they can with macros.
Reply by ●January 28, 2011
On 27/01/2011 17:50, Dave wrote:
> This is why it is good practice to define your own types explicitly in
> 'C'. Then when moving to a different compiler/machine you could just
> redo the definitions in one place rather than all throughout your
> code.

Don't define your own types - use #include <stdint.h> and the standard types int32_t, uint32_t, and so on. That's what <stdint.h> is there for!

But I fully agree with the principle of using fixed-size types when you know you want a fixed size, rather than relying on the compiler's default types.
Reply by ●January 28, 2011
On 27/01/2011 19:25, Vladimir Vassilevsky wrote:
> 1. The integer type by default is <int>. Hence all literals are
> interpreted as <int> unless type is stated explicitly. Whatever is the
> definition of <int> in your toolchain.

Not quite right - a literal is interpreted as an int only if it fits within an int. It is interpreted progressively as an unsigned int, long int, unsigned long int, long long int, or unsigned long long int as necessary, until a big enough type is found. So 0x1_0000_0000 (it's a pity C doesn't actually support Ada's underscore here) is interpreted as a long long even without an LL suffix. But 0x01 is an int, even though it fits within a short int or a char.

> 2. If the literal value does not fit into the type, the compiler should
> issue a warning.

Compilers /should/ issue warnings for such overflows, but you can't be sure that a particular compiler /will/ - some will truncate things silently. I have no idea how good VDSP is at that sort of thing - I'd recommend trying it and checking.

> 3. Don't use macros. Use constants.

I agree 95% here. Check whether the compiler will generate good code with const values - some do much worse, and you lose the optimisations that you would otherwise get with macro-defined constants.
/If/ your compiler is good enough, then it is much better to use consts (and similarly use inline functions rather than macros) in almost all cases. Also note that for constant values like this, "static const" is typically a better choice than plain "const", as the compiler can generate better code.
Reply by ●January 28, 2011
On 28.01.2011 09:23, David Brown wrote:
> On 27/01/2011 19:25, Vladimir Vassilevsky wrote:
>> 3. Don't use macros. Use constants.
>
> I agree 95% here. [...]
>
> Also note that for constant values like this, "static const" is
> typically a better choice than plain "const", as the compiler can
> generate better code.

Correct. In my case, the value of SCALE is a power of 2, and the compiler translates the division by it into a shift operation (>>19).

#define FRAME_LEN 8192
#define SCALE (0x100000000 / FRAME_LEN)

Andre
Reply by ●January 28, 2011
> this is a bit off topic, but in my case it is the visual DSP compiler
> from Analog that I refer to.
> [...]
> My question: What precision can I assume the compiler uses when
> calculating literal values like this? Would 0x10000000000000000 still work?

I make a habit of having a DEBUG build of my DSP code which displays important derived or calculated constants so that I can manually double-check them. The Texas Instruments C55x compiler and debugging environment allow for printf(), even in an embedded system, by using the JTAG emulator. Thus, I can see my constants printed on the screen when I run the debug firmware. If printf() is not available, then perhaps there is some other I/O available that could be used as confirmation.

But whatever you do, if you use these confirmation techniques, make sure they are not compiled into the release code - they're totally useless after your code is finished.

Brian Willoughby
Sound Consulting






