
### choosing gmp or double/float in C++

Posted: **Sat Nov 18, 2017 2:44 pm**

by **jocaps**

I am interested in using giac for some engineering/scientific computation in C++ that requires real-time polynomial arithmetic. I am, however, concerned about speed if I require high precision. Luckily, I am satisfied with float precision.

I am pretty sure I can set the precision in giac so that my real numbers are computed in float (32-bit) precision. My question, however, is whether this is comparable to computing with plain floats in C++. Imagine, for instance, that I have Pi in float precision in native C++ and take its n-th power (for a large enough natural number n). Will obtaining Pi^n be just as fast if I had Pi stored in a gen variable "p" and then computed "p^n"? I will do some benchmarks to test this, but I am curious whether someone has already done so.

Jose

### Re: choosing gmp or double/float in C++

Posted: **Sat Nov 18, 2017 5:01 pm**

by **parisse**

If you are working with a desktop processor, the speed with floats is the same as with doubles, except if you have large data and the speed is governed by memory access (floats are 2× shorter than doubles). Giac does not support float; a giac::gen is 64 bits by default, and if you store approximate reals inside with default precision (more precisely, Digits <= 14), they will be represented by truncated doubles (48-bit mantissa instead of 53; 5 bits are used for type discrimination). Usual arithmetic (+, -, *, ...) on approximate numbers represented by built-in doubles is significantly faster than with giac::gen (perhaps 5×). The same is true for small integers (< 2^62) represented by long long instead of giac::gen. For more complex operations like matrix products, Giac does argument checking, converts to intermediate matrices of double, does the product on that representation, and converts back; you will not win by coding it yourself unless you write well-optimized code.

For the first kind of operations, I'm currently working on a cpp command that will translate an Xcas function with typed variables into a C++ loadable module. There is an example in the latest source tarball, examples/Exemple/demo/giac_Mandelbrot.cpp; it was initially generated with cpp from Mandelbrot.xws (same directory).

### Re: choosing gmp or double/float in C++

Posted: **Sun Nov 19, 2017 10:07 am**

by **jocaps**

The results are not very optimistic for me. I took the number 1.2 (as a double) and used std::pow to take its 1000th power 1000 times (so I am computing 1.2^1000 1000 times). On my (rather slow) computer I get the following results:

Code: Select all

```
Time for giac: 0.00147427
Giac result: 1.51791008917e+079
Time for std: 7.33105e-007
std result: 1.51791e+079
```

The worst part is that the time you see with std::pow is probably just overhead, because if you loop 1000 more times (so doing this 1000000 times in total) you get:

Code: Select all

```
Time for giac: 1.4433
Giac result: 1.51791008917e+079
Time for std: 7.33105e-007
std result: 1.51791e+079
```

I cannot explain why std has not become slower here; it might be due to some compiler optimizations (I am using MSVC). In any case, I feel that the speed advantage of std::pow with doubles is much more than 5×.

**Edit:** I disabled MSVC compiler optimizations and got a more believable result:

1000 loops of 1.2^1000 gives

Code: Select all

```
Time for giac: 0.0015329
Giac result: 1.51791008917e+079
Time for std: 8.79725e-006
std result: 1.51791e+079
```

1000000 loops of 1.2^1000 gives

Code: Select all

```
Time for giac: 1.45106
Giac result: 1.51791008917e+079
Time for std: 0.00911909
std result: 1.51791e+079
```

This tells me that giac is approximately 150 times slower.

All times are in seconds and were measured using CPU ticks.

### Re: choosing gmp or double/float in C++

Posted: **Sun Nov 19, 2017 10:23 am**

by **parisse**

pow is slower than other arithmetic operations on gen, since it has a lot of special cases to handle, but it is not representative of numeric algorithms: you don't call pow that often. I would expect a factor around 10 for more representative operations like +, -, *.

### Re: choosing gmp or double/float in C++

Posted: **Sun Nov 19, 2017 10:54 am**

by **jocaps**

parisse wrote: pow is slower than other arithmetic operations on gen, since it has a lot of special cases to handle, but it is not representative of numeric algorithms: you don't call pow that often. I would expect a factor around 10 for more representative operations like +, -, *.

I see. So how does giac evaluate univariate polynomials? Does it just repeat * several times to evaluate the terms? If I do a naive loop of * (not fast exponentiation by squaring) 1000 times on 1.2 (with gen), I get a slower result than pow (on gen). When I tried 1000000 iterations, it was so slow that I had to kill the process.

In any case, I do believe that polynomial evaluation with giac would still be beneficial for me.

### Re: choosing gmp or double/float in C++

Posted: **Sun Nov 19, 2017 11:31 am**

by **parisse**

For univariate polynomials, you can convert the polynomial to a list of coefficients with symb2poly, then call horner.