Compile time evaluation in C
Started by steveu@coppice.org ● June 13, 2008

Hi all,

I'm sitting here calculating a bunch of constants to go into a C program, when it would be faster and clearer to just type the expressions into the code. It seems odd that I need to do this. It would seem sensible that I could write something like:

const int32_t threshold = ((int32_t) (32768.0*pow(10.0, -30.0/10.0)));

and have my constant worked out at compile time. With constants all the way through, and only well known core math functions like pow() being used, the compiler is certainly able to resolve everything at compile time, and just stick the result in the object file. Compilers typically inline many trig functions, where the hardware supports their quick evaluation, so they aren't committed to spewing out a call to pow() in case a special pow() is provided at link time. However, no compiler I have tried seems to do this compile time evaluation of constant math expressions, regardless of the optimisation level I select. They seem to limit themselves to only evaluating the basic + - * / operators for floats, and logic and shift operators for integers.

Has anyone met a compiler which does optimise things like this? Is there some wording in the C spec which would make such an optimisation illegal?

Regards,
Steve
Reply by Paul Russell ● June 13, 2008
steveu@coppice.org wrote:
> I'm sitting here calculating a bunch of constants to go into a C
> program, when it would be faster and clearer to just type the
> expressions into the code.
> [...]
> Has anyone met a compiler which does optimise things like this? Is
> there some wording in the C spec which would make such an optimisation
> illegal?

In this particular example I don't think the optimisation could ever be legitimate, as pow() is a library call and the compiler has no way of knowing which library you are subsequently going to be linking with, or otherwise determining what the "correct" result of a call to pow() should be.

A const expression such as this should only ever be evaluated once, however, so trying to force this evaluation to happen at compile-time is probably a pointless (and premature) optimisation.

Paul
Reply by steveu ● June 13, 2008
On Jun 13, 5:43 pm, Paul Russell <pruss...@sonic.net> wrote:
> In this particular example I don't think the optimisation could ever be
> legitimate, as pow() is a library call and the compiler has no way of
> knowing which library you are subsequently going to be linking with or
> otherwise determining what the "correct" result of a call to pow()
> should be.

I've had trig functions transparently inlined for me with some compilers. Various standard library stuff gets treated as "built ins" these days. Simple stuff, like toupper(), has been handled by macros for years (broken macros in the case of some Microsoft compilers). Writing something that looks like a function call is not much of a guarantee of anything with modern C compilers, when you are using names that you'll find in the standard headers.

> A const expression such as this should only ever be evaluated once,
> however, so trying to force this evaluation to happen at compile-time is
> probably a pointless (and premature) optimisation.

It sounds like you've never done anything deeply embedded. That little bit of floating point code will drag a mass of support library stuff into the runtime on many machines, and be a disaster. That's why I was tediously hand calculating a bunch of constants in the first place.

Regards,
Steve
Reply by Paul Russell ● June 13, 2008
steveu@coppice.org wrote:
> Writing something that looks like a function call is not much of a
> guarantee of anything with modern C compilers, when you are using
> names that you'll find in the standard headers.

I've never seen this with stuff from <math.h>, but I have seen it with memcpy et al. Normally this kind of thing is controlled by compiler switches, since there are good reasons why you might not want this kind of "clever" inlining of library functions.

> It sounds like you've never done anything deeply embedded. That little
> bit of floating point code will drag a mass of support library stuff
> into the runtime on many machines, and be a disaster.

I've only been doing embedded stuff for about 30 years, so I've still got a lot to learn. ;-)

It sounds like you're using a very poor tool chain if it pulls in a lot of code just for one math function. Does your linker not dead-strip unused code?

One solution I've used for this kind of problem in the past is to generate .h files programmatically, e.g. using a C program or Perl script or some such. That way you can do compile-time evaluations that would not normally be possible with the C pre-processor, with the advantage (compared to doing it by hand) that it's automatic and can be incorporated into the build process.

Paul
Reply by Rune ● June 13, 2008
On 13 Jun, 11:31, ste...@coppice.org wrote:
> const int32_t threshold = ((int32_t) (32768.0*pow(10.0, -30.0/10.0)));
>
> and have my constant worked out at compile time.
> [...]
> Has anyone met a compiler which does optimise things like this?

This can be done in C++ by means of templates. I know there is some interaction between C and C++, in the sense that features that were introduced in C++ and turned out to work well there are sometimes included in later C standards. I don't know if templates are candidates for such 'retrofitting', though.

Rune
Reply by Randy Yates ● June 13, 2008
steveu@coppice.org writes:
> Is there some wording in the C spec which would make such an
> optimisation illegal?

Yes. The ISO/IEC 9899:1999 (C99) spec says this:

    Constant expressions shall not contain assignment, increment,
    decrement, function-call, or comma operators, except when they are
    contained within a subexpression that is not evaluated.

--
%  Randy Yates                  % "She has an IQ of 1001, she has a jumpsuit
%% Fuquay-Varina, NC            %  on, and she's also a telephone."
%%% 919-577-9882                %
%%%% <yates@ieee.org>           % 'Yours Truly, 2095', *Time*, ELO
http://www.digitalsignallabs.com
Reply by Jason ● June 13, 2008
On Jun 13, 6:32 am, ste...@coppice.org wrote:
> It sounds like you've never done anything deeply embedded. That little
> bit of floating point code will drag a mass of support library stuff
> into the runtime on many machines, and be a disaster. That's why I was
> tediously hand calculating a bunch of constants in the first place.

Somewhat surprisingly, newer versions of GCC do this. They will optimize by replacing transcendental functions with constant arguments with their correct value. They use a library called MPFR for it. I'm not sure if they also do the pow() function or not, but if it's available for your platform, it might be worth a look.

Otherwise, a really dirty way to accomplish this, depending on the accuracy you require, would be to write a macro that implemented a polynomial approximation to pow() with base 10. When expanded, that could be accomplished all at compile-time. Just include as many terms as you need to get a good approximation. You can do this a bit more cleanly with C++ template metaprogramming, but it essentially does the same thing.

Jason
Reply by steveu ● June 13, 2008
Randy Yates wrote:
> Yes. The ISO/IEC 9899:1999 (C99) spec says this:
>
>     Constant expressions shall not contain assignment, increment,
>     decrement, function-call, or comma operators, except when they are
>     contained within a subexpression that is not evaluated.

That's interesting, because every compiler I have tried is happy to accept my source code, and generate run time code to set the const variable. I assume they make sure the const variable is not in ROM, though I haven't really checked. :-)

Steve
Reply by Tim Wescott ● June 13, 2008
On Fri, 13 Jun 2008 12:07:42 +0100, Paul Russell wrote:
> It sounds like you're using a very poor tool chain if it pulls in a lot
> of code just for one math function. Does your linker not dead-strip
> unused code?

It's not at all uncommon for a great deal of floating point support to be dragged in the first time you use _any_ floating point. For small ROM footprints you have to avoid floats _entirely_.
--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
Reply by Tim Wescott ● June 13, 2008
On Fri, 13 Jun 2008 02:31:37 -0700, steveu wrote:
> I'm sitting here calculating a bunch of constants to go into a C
> program, when it would be faster and clearer to just type the
> expressions into the code.
> [...]

When I have a ton of numbers to generate I'll write a script in whatever language is sensible (for me that's C++ or Scilab, although I'm sure that Perl, Python, or Ruby would be good choices for some things), and I'll generate a header file or .c file as part of the build. Other than remembering that it's the script and not the .c file that's the source that needs to be archived, and having to make sure that I don't lose the script-interpreting tool, this has worked quite well for me.

--
Tim Wescott
http://www.wescottdesign.com