
Optimizing Fenwick Tree (C++)

I have the following implementation of the Fenwick Tree algorithm.

#include <bits/stdc++.h>
#define LSOne(S) (S & (-S))
using namespace std;
typedef long long int L;

// Range-sum query: returns the prefix sum of x[1..b].
L rsq(int b, L x[]){
    L sum = 0;                       // long long, not int: prefix sums can overflow int
    for(; b; b -= LSOne(b)) sum += x[b];
    return sum;
}

// Point update: adds v to position k.
void adjust(int k, L v, L x[], int n){
    for(; k <= n; k += LSOne(k)) x[k] += v;
}

int main(){
    int n, q; cin >> n >> q;
    vector<L> ft(n + 1, 0);   // heap storage: a stack VLA of 5000001 long longs (~40 MB) is not standard C++ and can overflow the stack
    while(q--){
        char x; int i, j; cin >> x;
        if(x == '+'){
            cin >> i >> j;
            adjust(i + 1, j, ft.data(), n);
        } else if(x == '?'){
            cin >> i;
            cout << rsq(i, ft.data()) << endl;
        }
    }
}

This program needs to handle N <= 5000000 and process Q <= 5000000 queries, and it has to finish within 9 seconds. But after submitting it I got a Time Limit Exceeded (TLE) verdict. I have tried every optimization I could think of, to no avail; it still gets TLE. How can I optimize this code so that it runs in under 9 seconds? Thank you very much.

The time needed to read up to 5000000 lines from stdin could be the problem. Did you try optimizing the IO buffering:

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    ...
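
On top of the buffering, note that endl flushes cout after every answer, which is expensive over millions of queries; printing '\n' instead avoids the per-query flush. A minimal sketch of the complete program with both changes applied (assuming the same input format as in the question) could look like this:

#include <bits/stdc++.h>
#define LSOne(S) (S & (-S))
using namespace std;
typedef long long L;

// Prefix sum of x[1..b].
L rsq(int b, L x[]){
    L sum = 0;
    for(; b; b -= LSOne(b)) sum += x[b];
    return sum;
}

// Add v to position k.
void adjust(int k, L v, L x[], int n){
    for(; k <= n; k += LSOne(k)) x[k] += v;
}

int main(){
    ios::sync_with_stdio(false);   // stop synchronizing C++ streams with C stdio
    cin.tie(nullptr);              // don't flush cout before every cin read
    int n, q; cin >> n >> q;
    vector<L> ft(n + 1, 0);
    while(q--){
        char x; int i, j; cin >> x;
        if(x == '+'){
            cin >> i >> j;
            adjust(i + 1, j, ft.data(), n);
        } else if(x == '?'){
            cin >> i;
            cout << rsq(i, ft.data()) << '\n';   // '\n' instead of endl: no flush per query
        }
    }
}

Once sync_with_stdio(false) is set, avoid mixing scanf/printf with cin/cout in the same program. If this is still too slow at Q = 5000000, the usual next step is a hand-rolled parser based on getchar, or reading the whole input in one go with fread, instead of operator>>.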
