
Binary Search on Large Disk File in C - Problems

This question recurs frequently on StackOverflow, but I have read all the previous relevant answers, and have a slight twist on the question.

I have a 23 GB file containing 475 million lines of equal length, each line consisting of a 40-character hash code followed by an identifier (an integer).

I have a stream of incoming hash codes - billions of them in total - and for each incoming hash code I need to locate it in the file and print out the corresponding identifier. This job, while large, only needs to be done once.

The file is too large for me to read into memory, so I have been trying to use mmap in the following way:

codes = (char *) mmap(0, statbuf.st_size, PROT_READ, MAP_SHARED, codefile, 0);

Then I just do a binary search using address arithmetic based on the address in codes.
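
For reference, here is a minimal sketch of the kind of address-arithmetic binary search described above. The fixed record length (LINE_LEN) and the exact layout of the identifier are assumptions, not taken from the question:

#include <string.h>
#include <stddef.h>

#define HASH_LEN 40
#define LINE_LEN 52   /* assumed: 40-char hash + identifier + newline */

/* codes = mmapped base address, nlines = statbuf.st_size / LINE_LEN.
   Returns a pointer to the matching line, or NULL if not found. */
const char *find_line(const char *codes, size_t nlines, const char *target)
{
    size_t lo = 0, hi = nlines;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        const char *line = codes + mid * LINE_LEN;
        int cmp = memcmp(target, line, HASH_LEN);
        if (cmp == 0)
            return line;          /* the identifier follows the hash */
        if (cmp < 0)
            hi = mid;
        else
            lo = mid + 1;
    }
    return NULL;
}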

This seems to start working beautifully and produces a few million identifiers in a few seconds, using 100% of the CPU, but then after some seemingly random amount of time it slows down to a crawl. When I look at the process using ps, it has changed from status "R" using 100% of the CPU to status "D" (disk-bound) using 1% of the CPU.

This is not repeatable - I can start the process off again on the same data, and it might run for 5 seconds or 10 seconds before the "slow to crawl" happens. Once last night, I got nearly a minute out of it before this happened.

Everything is read only, I am not attempting any writes to the file, and I have stopped all other processes (that I control) on the machine. It is a modern Red Hat Enterprise Linux 64-bit machine.

Does anyone know why the process becomes disk-bound and how to stop it?

UPDATE:

Thanks to everyone for answering, and for your ideas; I had not tried all the various improvements before because I was wondering whether I was somehow using mmap incorrectly. But the gist of the answers seemed to be that unless I could squeeze everything into memory, I would inevitably run into problems. So I squashed the hash code down to the length of the leading prefix that did not create any duplicates - the first 15 characters were enough. Then I pulled the resulting file into memory, and ran the incoming hash codes in batches of about 2 billion each.
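
In case it helps anyone else, here is a hedged sketch of how such a shortest non-duplicating prefix can be found: because the file is sorted, any two equal prefixes must occur on adjacent lines, so one linear pass over the mmapped data suffices (the record layout constants below are assumptions):

#include <stddef.h>

#define HASH_LEN 40
#define LINE_LEN 52   /* assumed fixed record size */

/* Returns the shortest prefix length that is unique across all hashes,
   assuming the hashes are sorted and distinct. */
size_t min_unique_prefix(const char *codes, size_t nlines)
{
    size_t longest_common = 0;
    for (size_t i = 1; i < nlines; i++) {
        const char *a = codes + (i - 1) * LINE_LEN;
        const char *b = codes + i * LINE_LEN;
        size_t k = 0;
        while (k < HASH_LEN && a[k] == b[k])
            k++;
        if (k > longest_common)
            longest_common = k;
    }
    return longest_common + 1;   /* one char past the longest shared prefix */
}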

The first thing to do is split the file.

Make one file with the hash codes and another with the integer IDs. Since the rows correspond one-to-one, the identifier will line up with the hash once the result is found. You can also try an approach that puts every nth hash into another file and stores its index.

For example, put every 1000th hash key into a new file together with its index, then load that into memory and binary-scan it instead. This will tell you the range of 1000 entries that need to be scanned further in the big file. Yes, that will do it fine! But you can probably use a much smaller stride than that - keeping every 20th record or so would still cut the index down to about 1/20th of the file size, if I am thinking right.

In other words, after the in-memory scan you only need to touch a few kilobytes of the file on disk.
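
Here is a hedged sketch of building that kind of sparse index in memory; the struct name, STRIDE and the record layout are illustrative assumptions, not part of the original suggestion:

#include <stdlib.h>
#include <string.h>

#define HASH_LEN 40
#define LINE_LEN 52      /* assumed fixed record size */
#define STRIDE   1000    /* keep every 1000th hash, as suggested above */

struct sparse_entry {
    char   hash[HASH_LEN];
    size_t line;               /* line number in the big file */
};

/* Build the sparse index from the mmapped file; returns NULL on failure. */
struct sparse_entry *build_index(const char *codes, size_t nlines, size_t *out_n)
{
    size_t n = (nlines + STRIDE - 1) / STRIDE;
    struct sparse_entry *idx = malloc(n * sizeof *idx);
    if (!idx)
        return NULL;
    for (size_t i = 0; i < n; i++) {
        idx[i].line = i * STRIDE;
        memcpy(idx[i].hash, codes + idx[i].line * LINE_LEN, HASH_LEN);
    }
    *out_n = n;
    return idx;
}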

Another option is to split the file and put it in memory on multiple machines. Then just binary scan each file. This will yield the absolute fastest possible search with zero disk access...

Have you considered hacking up a PATRICIA trie? It seems to me that if you can build a PATRICIA tree representation of your data file, which refers back to the file for the hash and integer values, then you might be able to reduce each item to two node pointers (2 * 64 bits?), a bit-test offset (1 byte in this scenario) and a file offset (a uint64_t, which might need to correspond to multiple fseek()s).
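
A rough sketch of what such a node might look like, sized roughly as described above; the field names and layout are assumptions:

#include <stdint.h>

struct patricia_node {
    struct patricia_node *child[2];   /* two node pointers (2 * 64 bits) */
    uint8_t   bit_offset;             /* which bit of the hash to test (1 byte) */
    uint64_t  file_offset;            /* byte offset of the record in the data file */
};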

Does anyone know why the process becomes disk-bound and how to stop it?

Binary search requires a lot of seeking within the file. In the case where the whole file doesn't fit in memory, the page cache doesn't handle the big seeks very well, resulting in the behaviour you're seeing.

The best way to deal with this is to reduce/prevent the big seeks and make the page cache work for you.

Three ideas for you:

If you can sort the input stream, you can search the file in chunks, using something like the following algorithm:

code_block <- mmap the first N entries of the file, where N entries fit in memory
max_code <- code_block[N - 1]
while(input codes remain) {
  input_code <- next input code
  while(input_code > max_code)  {
    code_block <- mmap the next N entries of the file
    max_code <- code_block[N - 1]
  }
  binary search for input code in code_block
}
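
Here is a hedged C sketch of that chunked approach, assuming a fixed record length and a callback that yields the sorted incoming codes. For simplicity it maps the whole file once and slides a window of N entries through it, which gives the same sequential access pattern as remapping chunk by chunk:

#include <stdio.h>
#include <string.h>

#define HASH_LEN 40
#define LINE_LEN 52           /* assumed fixed record size */
#define N        (1u << 24)   /* entries per chunk; pick so a chunk fits in RAM */

/* codes = mmapped base address, nlines = number of records in the file,
   next_code() returns the next sorted incoming hash, or NULL at end of input. */
void search_sorted_stream(const char *codes, size_t nlines,
                          const char *(*next_code)(void))
{
    size_t chunk_start = 0;
    size_t chunk_len = nlines < N ? nlines : N;
    const char *input;

    while ((input = next_code()) != NULL) {
        /* advance the window until its last hash is >= the input code */
        while (chunk_start + chunk_len < nlines &&
               memcmp(input, codes + (chunk_start + chunk_len - 1) * LINE_LEN,
                      HASH_LEN) > 0) {
            chunk_start += chunk_len;
            if (chunk_start + chunk_len > nlines)
                chunk_len = nlines - chunk_start;
        }
        /* binary search for the input code within the current chunk */
        size_t lo = chunk_start, hi = chunk_start + chunk_len;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            int cmp = memcmp(input, codes + mid * LINE_LEN, HASH_LEN);
            if (cmp == 0) {
                printf("match at line %zu\n", mid);  /* identifier is in that record */
                break;
            }
            if (cmp < 0) hi = mid; else lo = mid + 1;
        }
    }
}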

If you can't sort the input stream, you could reduce your disk seeks by building an in-memory index of the data. Pass over the large file, and make a table that is:

record_hash, offset into file where this record starts

Don't store all records in this table - store only every Kth record. Pick a large K, but small enough that this fits in memory.

To search the large file for a given target hash, do a binary search in the in-memory table to find the biggest hash in the table that is smaller than the target hash. Say this is table[h]. Then mmap the segment starting at table[h].offset and ending at table[h+1].offset, and do a final binary search. This will dramatically reduce the number of disk seeks.
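
A hedged sketch of that lookup, reusing the sparse_entry table and constants from the earlier index-building sketch; the page-alignment handling and the minimal error handling are assumptions:

#include <sys/mman.h>
#include <sys/types.h>
#include <string.h>
#include <unistd.h>

/* fd = descriptor of the big file, idx/n = the in-memory table,
   nlines = total records. Returns a pointer into a fresh mapping (copy the
   record out and munmap the segment afterwards), or NULL if not found. */
const char *lookup(int fd, const struct sparse_entry *idx, size_t n,
                   size_t nlines, const char *target)
{
    /* binary search for the largest indexed hash that is <= target */
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (memcmp(idx[mid].hash, target, HASH_LEN) <= 0) lo = mid + 1;
        else hi = mid;
    }
    if (lo == 0)
        return NULL;                        /* target precedes every indexed hash */
    size_t first = idx[lo - 1].line;
    size_t last  = (lo < n) ? idx[lo].line : nlines;

    /* mmap offsets must be page aligned */
    long   pagesz   = sysconf(_SC_PAGESIZE);
    off_t  byte_off = (off_t)first * LINE_LEN;
    off_t  aligned  = byte_off - (byte_off % pagesz);
    size_t map_len  = (size_t)(last - first) * LINE_LEN + (size_t)(byte_off - aligned);

    char *seg = mmap(0, map_len, PROT_READ, MAP_SHARED, fd, aligned);
    if (seg == MAP_FAILED)
        return NULL;
    const char *base = seg + (byte_off - aligned);

    /* final binary search inside the small segment */
    size_t slo = 0, shi = last - first;
    while (slo < shi) {
        size_t mid = slo + (shi - slo) / 2;
        int cmp = memcmp(target, base + mid * LINE_LEN, HASH_LEN);
        if (cmp == 0)
            return base + mid * LINE_LEN;
        if (cmp < 0) shi = mid; else slo = mid + 1;
    }
    munmap(seg, map_len);
    return NULL;
}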

If this isn't enough, you can have multiple layers of indexes:

 record_hash, offset into index where the next index starts

Of course, you'll need to know ahead of time how many layers of index there are.


Lastly, if you have extra money available you can always buy more than 23 GB of RAM and make this a memory-bound problem again (I just looked at Dell's website - you can pick up a new low-end workstation with 32 GB of RAM for just under AU$1,400). Of course, it will take a while to read that much data in from disk, but once it's there, you'll be set.

Instead of using mmap, consider just using plain old lseek + read. You can define some helper functions to read a hash value or its corresponding integer:

/* Assumes fd (the open file descriptor) and line_len (the fixed record
   length) are set up elsewhere; needs <unistd.h>, <stdio.h>, <stdint.h>
   and _LARGEFILE64_SOURCE for lseek64. */
void read_hash(int line, char *hashbuf) {
    lseek64(fd, (uint64_t)line * line_len, SEEK_SET);
    if (read(fd, hashbuf, 40) != 40)
        perror("read_hash");             /* short read or error */
}

int read_int(int line) {
    lseek64(fd, (uint64_t)line * line_len + 40, SEEK_SET);
    int ret = -1;
    if (read(fd, &ret, sizeof(int)) != sizeof(int))
        perror("read_int");
    return ret;
}

Then just do your binary search as usual. It might be a bit slower, but it won't start chewing up your virtual memory.
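
For completeness, here is a hedged sketch of the binary search on top of those helpers; num_lines (the file size divided by line_len) and the return convention are assumptions:

#include <string.h>

/* Binary search using the helpers above; returns the identifier, or -1 if
   the target hash is not in the file. */
int find_id(const char *target, int num_lines)
{
    char buf[40];
    int lo = 0, hi = num_lines;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        read_hash(mid, buf);
        int cmp = memcmp(target, buf, 40);
        if (cmp == 0)
            return read_int(mid);
        if (cmp < 0) hi = mid; else lo = mid + 1;
    }
    return -1;
}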

We don't know the back story, so it is hard to give you definitive advice. How much memory do you have? How sophisticated is your hard drive? Is this a learning project? Who's paying for your time? 32 GB of RAM doesn't seem so expensive compared to two days of work from a person who makes $50/h. How fast does this need to run? How far outside the box are you willing to go? Does your solution need to use advanced OS concepts? Are you married to writing this in C? How about making Postgres handle this?

Here's a low-risk alternative. This option isn't as intellectually appealing as the other suggestions, but it has the potential to give you significant gains. Separate the file into 3 chunks of 8 GB or 6 chunks of 4 GB (depending on the machines you have around; each chunk needs to fit comfortably in memory). On each machine run the same software, but entirely in memory, and put an RPC stub around each. Then write an RPC caller that asks each of your 3 or 6 workers for the integer associated with a given hash code.
