comparing the time taken by mathematical operations

Which implementation would be faster (in theory)?

long int total = 0;
for (int n = 1; n <= 200000; )
{
  total = total + n*(++n);
}

or

long int total = 0, sum = 0;
for (int n = 1; n <= 200000; n++)
{
  sum = sum + 2*n;
  total = total + sum;
}

Or will there not be any difference?

What you need to do is write out explicitly the two programs you want to compare. I think I know what you mean, but it would be definitive if you wrote them explicitly.

Assuming my guess is right about what you mean, asymptotic time complexity makes no sense for this question, because there is no free variable. n is "bound" to the loop. If you replaced 200000 with M then it would make some sense to talk about time complexity.

The question of which of the two programs is faster in absolute terms is also poorly framed, because the answer most likely depends on context, as Mark Dickinson has pointed out. If my guess is right about what the programs look like, then the number of arithmetic operations is roughly the same. That could mean the programs run in about the same amount of time, or maybe not. My guess is the difference would be imperceptible, but why don't you just try it?
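If you want to measure inside the program rather than with an external tool, a minimal sketch of a timing harness might look like the following. It uses the standard clock() from <time.h>; the 1000-trial repeat count mirrors the one in the update below and is only there to make the elapsed time measurable, and you would swap in the second loop body to compare:

#include <stdio.h>
#include <time.h>

int main(void) {
  clock_t start = clock();

  long long total = 0;
  for (int trial = 0; trial < 1000; ++trial) {
    total = 0;
    for (int n = 1; n <= 200000; ++n) {
      total += (long long)n * (n + 1);  /* first variant: direct product */
    }
  }

  double ms = 1000.0 * (clock() - start) / CLOCKS_PER_SEC;
  printf("total = %lld, elapsed = %.1f ms\n", total, ms);
  return 0;
}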

Finally, if you're going to use the shortcut 2(1)+2(2)+...+2(n) = n(n+1), then why don't you go further and use a shortcut for the summation 1(2)+2(3)+...+n(n+1)? We have

n(n+1) = (1/3)((n+1)^3 - n^3) - (1/3)

(which follows from expanding (n+1)^3 - n^3 = 3n^2 + 3n + 1)

so

1(2)+2(3)+...+n(n+1) 
= (1/3)(2^3-1^3) - (1/3) + (1/3)(3^3-2^3) - (1/3) + ... + (1/3)((n+1)^3-n^3) - (1/3)
= ((n+1)^3 - n - 1)/3

The series is "telescoping", so all the cubed terms in the middle cancel. The resulting expression can be evaluated in 5 arithmetic operations.
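As a sanity check, here is a sketch of that closed form in C; I use long long because the result is far too large for a 32-bit type, and the comments count the five operations. Note that m*m*m - m equals (n+1)^3 - n - 1, matching the formula above.

#include <stdio.h>

int main(void) {
  long long n = 200000;
  long long m = n + 1;                /* op 1: addition */
  long long total = (m*m*m - m) / 3;  /* ops 2-5: two multiplies, a subtract, a divide */
  printf("total = %lld\n", total);    /* prints 2666706666800000 for n = 200000 */
  return 0;
}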

If you substitute n=200000 in the above, the total is about 2.7×10^15, so total will overflow if long int is a 32-bit type. Is that your intent?

Update

I tried it on my system (MacBook Air/Yosemite). The programs I compared are

#include <stdio.h>

int main() {
  long int total;
  for (int trial = 0; trial < 1000; ++trial) {  /* repeat so the run time is measurable */
    total = 0;
    for (int i = 1; i <= 200000; ++i) {
      total = total + (long)i * (i + 1);        /* cast so the product doesn't overflow a 32-bit int */
    }
  }
  printf("answer is %ld\n", total);
  return 0;
}

and

#include <stdio.h>

int main() {
  long int total;
  long int sum;                                 /* long: the running sum exceeds 32-bit int range */
  for (int trial = 0; trial < 1000; ++trial) {  /* repeat so the run time is measurable */
    total = 0;
    sum = 0;
    for (int i = 1; i <= 200000; ++i) {
      sum = sum + 2*i;                          /* sum == i*(i+1) after this line */
      total = total + sum;
    }
  }
  printf("answer is %ld\n", total);
  return 0;
}

I compiled both with gcc -O and then ran them under the Unix time command. The clear winner (on my system, with its particular architecture, and with the choices I made) is the first program: it takes about 160 milliseconds, whereas the second takes about 240 milliseconds, so the first runs in about 2/3 the time of the second. I don't know enough about the architecture to speculate on the reason for the difference.
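For reference, the build-and-measure steps were along these lines (the file names here are placeholders of mine):

gcc -O -o first first.c
time ./first
gcc -O -o second second.c
time ./second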

Note that the line in your program, total = total + n*(++n);, is incorrect: n is both read and modified in the same expression with no intervening sequence point, which is undefined behavior in C. The n*(n++) variant is just as bad; on many compilers it happens to compute n*n and then increment n. Both forms generate a compiler warning.
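A well-defined way to express the same computation is to move the increment into the for-statement, as in the first benchmark above:

long int total = 0;
for (int n = 1; n <= 200000; ++n) {
  total = total + (long)n * (n + 1);  /* no read-and-modify of n in one expression */
}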

A few stylistic issues: I use i rather than n for a loop index, so people don't get confused about what n means. And I habitually write ++x rather than x++ to increment x, since x++ may create an unnecessary temporary copy of the old value of x.

Well, if you are talking in terms of algorithms, then the complexity of the first approach is lower than that of the second: the first is O(1), whereas the second is O(n). So as the number of inputs grows, the first will probably take less time than the second. Delay is directly proportional to the number of machine instructions required to execute the operation.
