Is there a significant performance hit using double precision (vs single)?

has anyone got any recommendations please?

Yes, there can be. The issue is not really with the operation itself but with moving the data to and from memory. Doubles are twice as big as floats, so they take twice the memory bandwidth. Once the data is in a register, operations take pretty much the same amount of time.

Hence the performance hit depends on what your program does.

I never worry about that kind of stuff; it’s like choosing between int and int64. I use a double if the precision requirement calls for it; otherwise I use a float.

I’m assuming x86 here.

I can’t imagine that the time to copy the extra bytes in a double is anything close to the time it takes to do a floating point multiply. Isn’t it just 1 bus transfer? But I’ve been wrong before.

What I’ve stumbled upon is a claim that x86 float arithmetic is done in hardware while double arithmetic has to be emulated in software (I’m paraphrasing here), so basically I’m going to switch all my doubles back to floats and keep an eye out for that literal gotcha.

see gcc.gnu.org/onlinedocs/gcc/Warning-Options.html (and read the -Wdouble-promotion warning)

I think you are confusing two different things.

floats are 32 bit floating point numbers which are multiplied in hardware

doubles are 64 bit floating point numbers which are multiplied in hardware

The x86 floating point unit computes using 80-bit internal numbers

floating point is often not exact.

32 bit integer multiplication is done in hardware

64 bit integer multiplication needs software support; however, it may still be faster than floating point

I imagine it can be done in hardware on a 64-bit processor, but QNX doesn’t support this yet.

So if you don’t need floating point, integer multiplication is usually preferred.

I’m ready to be corrected here, but I was going on this (from the link): "CPUs with a 32-bit “single-precision”

Since the days of the 8088 (when you needed to buy a separate 8087 chip), Intel hardware floating point has worked as I described: internally it uses 80 bits of precision to minimize rounding error. Externally you only get 32- or 64-bit numbers.

This article describes the issue somewhat:

en.wikipedia.org/wiki/Extended_precision

You can use “long double” to get 80 bits on x86 with GCC. Other compilers may use a different size.