
Strange behavior when a single file is opened in several ifstreams

I am learning about fstreams and stumbled upon a strange issue.

I have a very simple program that copies a file; the copy code is here:

void CCopyFile::Start(){
    std::ifstream src(mSrc, std::ifstream::binary); // mSrc and mDst strings
    std::ofstream dst(mDst, std::ofstream::binary); // with path to files

    if (src.is_open() && dst.is_open()){
        mCurr = src.tellg();  // std::streampos
        src.seekg(0, std::ios::end);
        mFileSize = src.tellg() - mCurr; // std::streampos
        src.seekg(0, std::ios::beg);
        mCurr = 0;

        while (mCurr < mFileSize){
            if (mFileSize - mCurr < mBufSize){
                // Shrink the buffer to the size of the final partial chunk
                delete[] mBuf;
                mBufSize = mFileSize - mCurr;
                mBuf = new char[mBufSize];
            }
            src.read(mBuf, mBufSize);
            dst.write(mBuf, mBufSize);
            mCurr += mBufSize;
        }
        }
        src.close();
        dst.close();
    }
}

If I launch several parallel instances of this class to copy different files, everything is OK. For reference, here is the console output of a function that checks the copying progress every 10 seconds:

[d:\a] -> [d:\outfile]
[1456448MB] -> [5212616MB]
[d:\zz] -> [d:\outfile2]
[259200MB] -> [5212616MB]

But if I launch copying of the same file a few times, I get this:

[d:\a] -> [d:\out1]
[1375232MB] -> [5212616MB]
[d:\a] -> [d:\out2]
[1375232MB] -> [5212616MB]

The most interesting part: if I launch one copy process, everything is fine and the copied file grows. If I launch a second copy process for the same file, the second output file is created at the same size as the first thread's copy, and from then on both threads always read the file at exactly the same position. I don't know why. Maybe the first ifstream holds some unique lock on the file?

Full code is available here -> http://pastebin.com/NRVvxuSg

The second read of the same file will be much faster than the first, since the data is already cached in RAM. That means that if one thread lags behind the other, its reads become faster, and it catches up with the thread that is ahead.
