
Calculating FLOPs

I am writing a program to calculate how long my CPU takes to do one "FLOP" (floating-point operation). For that I wrote the code below:

#include <stdio.h>
#include <time.h>

#define MAX 100000000L   /* iteration count; the value was not given in the question */

int main(void) {
    double y = 4.8, x = 2.3, z = 0;
    clock_t before = clock();
    for (long i = 0; i < MAX; ++i)
        z = x * y + z;
    /* cast to double, otherwise the clock_t division truncates to an integer */
    printf("%1.20f\n", ((double)(clock() - before) / CLOCKS_PER_SEC) / MAX);
    return 0;
}

The problem is that I am repeating the same operation. Doesn't the compiler optimize this sort of thing away? If so, what do I have to do to get correct results?

I am not using the "rand" function, so it does not interfere with my result.

This has a loop-carried dependency and not enough independent work to do in parallel, so even if the loop is executed at all, it would not be FLOPS you are measuring: you would most likely be measuring the latency of a floating-point addition. The loop-carried dependency chain serializes all those additions. That chain has small side-chains containing the multiplications, but they don't depend on anything, so only their throughput matters, and that throughput is better than the latency of an addition on any reasonable processor.
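To make the serialization visible, here is the question's loop body unrolled three times (an illustration only):

z = x * y + z;   /* add #1 needs the initial z           */
z = x * y + z;   /* add #2 must wait for add #1's result */
z = x * y + z;   /* add #3 must wait for add #2's result */
/* the x*y products do not depend on z and can be computed
   ahead of time, so only the chain of additions limits speed */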

To actually measure FLOPS, there is no single recipe. The optimal conditions depend strongly on the microarchitecture: how many independent dependency chains you need, the optimal add/multiply ratio, and whether you should use FMA all vary. Typically you have to do something more complicated than what you wrote, and if you are set on using a high-level language, you have to somehow trick it into emitting the computation at all.
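As a rough illustration of "more independent chains", here is a minimal sketch with eight independent accumulators; the accumulator count, the constants, and the iteration count are all illustrative and would need tuning per microarchitecture, and a compiler may still vectorize or otherwise transform the loop:

#include <stdio.h>
#include <time.h>

#define N 100000000L   /* illustrative iteration count */

int main(void) {
    /* eight independent chains: each accumulator depends only on
       itself, so multiply-adds from different chains can overlap
       in the pipeline instead of serializing */
    double a0 = 0, a1 = 0, a2 = 0, a3 = 0, a4 = 0, a5 = 0, a6 = 0, a7 = 0;
    const double x = 0.999999, y = 0.000001;   /* keep the values bounded */
    clock_t before = clock();
    for (long i = 0; i < N; ++i) {
        a0 = a0 * x + y;  a1 = a1 * x + y;
        a2 = a2 * x + y;  a3 = a3 * x + y;
        a4 = a4 * x + y;  a5 = a5 * x + y;
        a6 = a6 * x + y;  a7 = a7 * x + y;
    }
    double t = (double)(clock() - before) / CLOCKS_PER_SEC;
    double flops = 2.0 * 8.0 * (double)N;   /* one mul + one add per chain */
    /* print the sum so the compiler cannot discard the loop */
    printf("sum=%f  GFLOPS=%.2f\n",
           a0 + a1 + a2 + a3 + a4 + a5 + a6 + a7, flops / t / 1e9);
    return 0;
}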

For inspiration, see "How do I achieve the theoretical maximum of 4 FLOPs per cycle?"

Even if no compiler optimization is going on (the possibilities have already been nicely listed), your variables and the result will be in cache after the first loop iteration, and from then on you are running with far more speed than you would be if the program had to fetch new values for each iteration.

So if you want to measure the time for a single FLOP of this program under realistic conditions, you would actually have to provide new input for every iteration. Consider using rand(), seeded with a known value such as srand(1) so that runs are reproducible.
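One way to realize that suggestion (a sketch with illustrative sizes) is to generate the random inputs into arrays before the timed loop, so the cost of rand() itself is not measured:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAX 1000000L   /* illustrative iteration count */

int main(void) {
    double *xs = malloc(MAX * sizeof *xs);
    double *ys = malloc(MAX * sizeof *ys);
    if (!xs || !ys) return 1;
    srand(1);   /* fixed seed: the same inputs on every run */
    for (long i = 0; i < MAX; ++i) {
        xs[i] = (double)rand() / RAND_MAX;
        ys[i] = (double)rand() / RAND_MAX;
    }
    double z = 0;
    clock_t before = clock();
    for (long i = 0; i < MAX; ++i)
        z = xs[i] * ys[i] + z;   /* fresh operands every iteration */
    double t = (double)(clock() - before) / CLOCKS_PER_SEC;
    printf("z=%f  seconds per iteration: %1.20f\n", z, t / MAX);
    free(xs);
    free(ys);
    return 0;
}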

Your calculation should also be different: the FLOP count is the number of floating-point operations your program performs, in your case 2*n (where n = MAX), since each iteration does one multiplication and one addition. To get the time per FLOP, divide the elapsed time by the FLOP count.
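For concreteness (the numbers here are purely illustrative): with MAX = 10^8 the loop performs 2 x 10^8 FLOPs; if it takes 0.5 s, the time per FLOP is 0.5 / (2 x 10^8) = 2.5 ns, which corresponds to 0.4 GFLOPS.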
