What is the fastest way to read, sort and merge multiple files in Java?

I am working on a project that deals with reading and processing huge .txt files containing various data for certain individuals.

Multiple files have to be read, sorted by the individual ID (which is present in all files) and then merged, in the sense of collecting together all the entries from all the files that belong to the same ID. In other words, each individual can have multiple entries (i.e., lines) in every file. I need to retrieve all the information I find for one ID, store it, and then move on to the next one.

So far I've tried FileChannel, FileInputStream and MappedByteBuffer, but apparently the best fit for my case is a FileInputStream wrapped in a BufferedReader, and for sorting the entries I've seen Collections.sort() recommended. An important issue is that I don't know the specs of the PCs that will run the application, and the files can be bigger than 2 GB. Any help would be appreciated.

If the files are large enough you will have to use an external sort, in which case a database really starts to become the most practical alternative. There are no external sort methods in the JDK.
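For reference, here is a minimal sketch of what an external merge sort looks like. It assumes, purely for illustration, that each line starts with the ID followed by a tab (adjust BY_ID to your real record format), and the chunk size of 500,000 lines is an arbitrary knob to tune to the available memory:

    import java.io.*;
    import java.nio.file.*;
    import java.util.*;

    // External merge sort sketch: read the input in chunks that fit in
    // memory, sort each chunk by ID into a temp file, then k-way merge
    // the sorted temp files.
    public class ExternalSort {

        // Assumption: the ID is the first tab-separated field of each line.
        static final Comparator<String> BY_ID =
                Comparator.comparing(line -> line.substring(0, line.indexOf('\t')));

        public static void main(String[] args) throws IOException {
            List<Path> chunks = splitIntoSortedChunks(Paths.get(args[0]), 500_000);
            mergeChunks(chunks, Paths.get(args[1]));
        }

        // Phase 1: write runs of at most linesPerChunk sorted lines to temp files.
        static List<Path> splitIntoSortedChunks(Path input, int linesPerChunk)
                throws IOException {
            List<Path> chunks = new ArrayList<>();
            try (BufferedReader in = Files.newBufferedReader(input)) {
                List<String> buffer = new ArrayList<>(linesPerChunk);
                String line;
                while ((line = in.readLine()) != null) {
                    buffer.add(line);
                    if (buffer.size() == linesPerChunk) {
                        chunks.add(writeSortedChunk(buffer));
                        buffer.clear();
                    }
                }
                if (!buffer.isEmpty()) chunks.add(writeSortedChunk(buffer));
            }
            return chunks;
        }

        static Path writeSortedChunk(List<String> lines) throws IOException {
            lines.sort(BY_ID);
            Path tmp = Files.createTempFile("sort-chunk", ".txt");
            Files.write(tmp, lines);
            return tmp;
        }

        // Phase 2: k-way merge -- the priority queue always yields the chunk
        // whose current head line has the smallest ID.
        static void mergeChunks(List<Path> chunks, Path output) throws IOException {
            PriorityQueue<Head> heads =
                    new PriorityQueue<>(Comparator.comparing((Head h) -> h.line, BY_ID));
            for (Path chunk : chunks) {
                BufferedReader reader = Files.newBufferedReader(chunk);
                String first = reader.readLine();
                if (first != null) heads.add(new Head(first, reader));
            }
            try (BufferedWriter out = Files.newBufferedWriter(output)) {
                while (!heads.isEmpty()) {
                    Head head = heads.poll();
                    out.write(head.line);
                    out.newLine();
                    String next = head.reader.readLine();
                    if (next != null) heads.add(new Head(next, head.reader));
                    else head.reader.close();
                }
            }
        }

        static final class Head {
            final String line;
            final BufferedReader reader;
            Head(String line, BufferedReader reader) { this.line = line; this.reader = reader; }
        }
    }

Once the output is sorted by ID, all lines for one ID are consecutive, so collating them is a single streaming pass.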

If you expect to be processing more data than the target environment can fit into memory then you will either have to use some form of on-disk streaming or reparse the file multiple times.

The decision as to which option to pursue depends on the distribution of data.

If there are relatively few lines per ID (i.e., lots of distinct IDs) then reparsing will be the slowest option, assuming you need the collated results for all IDs.

If there are relatively few IDs (i.e., lots of lines per ID) then reparsing may become more efficient.

My guess is that reparsing for each ID will be inefficient in the general case (but if you know there are maybe fewer than 10 distinct IDs, I would consider a reparse-based solution).
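For completeness, a reparse-based solution might look something like this sketch. The tab-separated ID format and the idOf helper are assumptions for illustration; the total cost is (number of distinct IDs) x (file size), which is why it only pays off for very few IDs:

    import java.io.*;
    import java.nio.file.*;
    import java.util.*;

    // Reparse-based sketch for the "very few distinct IDs" case:
    // one full scan of the file per ID.
    public class ReparsePerId {

        // Assumption: tab-separated lines with the ID as the first field.
        static String idOf(String line) {
            return line.substring(0, line.indexOf('\t'));
        }

        static void processEachId(Path file) throws IOException {
            // First pass: discover the distinct IDs.
            Set<String> ids = new LinkedHashSet<>();
            try (BufferedReader r = Files.newBufferedReader(file)) {
                String line;
                while ((line = r.readLine()) != null) ids.add(idOf(line));
            }
            // One additional full scan per ID.
            for (String id : ids) {
                List<String> records = new ArrayList<>();
                try (BufferedReader r = Files.newBufferedReader(file)) {
                    String line;
                    while ((line = r.readLine()) != null)
                        if (idOf(line).equals(id)) records.add(line);
                }
                handle(id, records); // process one ID completely, then move on
            }
        }

        static void handle(String id, List<String> records) {
            System.out.println(id + ": " + records.size() + " records");
        }
    }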

The idea then is that you parse the file just once, putting the results into a kind of map of lists...

Map<Id,List<Record>>
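As a sketch (simplified to String IDs and the raw line as the record, with the tab-separated format again an assumption), the single grouping pass could look like:

    import java.io.*;
    import java.nio.file.*;
    import java.util.*;

    class Grouping {
        // One pass over the file, grouping raw lines under their ID.
        // Assumption: the whole map fits in memory.
        static Map<String, List<String>> groupById(Path file) throws IOException {
            Map<String, List<String>> byId = new HashMap<>();
            try (BufferedReader r = Files.newBufferedReader(file)) {
                String line;
                while ((line = r.readLine()) != null) {
                    String id = line.substring(0, line.indexOf('\t'));
                    byId.computeIfAbsent(id, k -> new ArrayList<>()).add(line);
                }
            }
            return byId;
        }
    }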

The problem you face is that you don't have enough memory to hold such a map...

So you will need to create an intermediate, temporary on-disk store to hold the lists for each ID.

You have two options for the on disk store:

  1. Roll your own

  2. Use a database (e.g. Derby or HSQLDB or ...)

Option 1 is more work, but you can optimise it for your use case (namely: append-only writes, then at the end reading all the records back in and sorting them).
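A sketch of option 1 might look like this: partition lines into append-only bucket files by a hash of the ID, so all records for a given ID land in the same bucket, then group each bucket in memory. The bucket count N = 64 and the line format are assumptions; size N so that (total data / N) fits comfortably in RAM:

    import java.io.*;
    import java.nio.file.*;
    import java.util.*;

    public class BucketStore {
        static final int N = 64; // assumed bucket count, tune to data size

        public static void main(String[] args) throws IOException {
            // Phase 1: append-only partitioning pass.
            BufferedWriter[] buckets = new BufferedWriter[N];
            for (int i = 0; i < N; i++)
                buckets[i] = Files.newBufferedWriter(Paths.get("bucket-" + i + ".txt"));
            try (BufferedReader in = Files.newBufferedReader(Paths.get(args[0]))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String id = line.substring(0, line.indexOf('\t')); // assumed format
                    BufferedWriter w = buckets[Math.floorMod(id.hashCode(), N)];
                    w.write(line);
                    w.newLine();
                }
            }
            for (BufferedWriter w : buckets) w.close();

            // Phase 2: group each bucket in memory, one bucket at a time.
            for (int i = 0; i < N; i++) {
                Map<String, List<String>> byId = new HashMap<>();
                for (String line : Files.readAllLines(Paths.get("bucket-" + i + ".txt")))
                    byId.computeIfAbsent(line.substring(0, line.indexOf('\t')),
                            k -> new ArrayList<>()).add(line);
                byId.forEach((id, records) -> {
                    // process all records for one ID here, then move to the next
                });
            }
        }
    }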

Option 2 will be easier and quicker to implement, at the risk of some performance, since the database will maintain an index on the IDs in case you want to read the data randomly while parsing (which you don't in this use case)...
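A sketch of option 2 with plain JDBC could look like this, shown against an embedded HSQLDB file database (Derby or H2 would only change the connection URL); it assumes the HSQLDB jar is on the classpath, and the insert values are placeholders for your parsing loop:

    import java.sql.*;

    public class DbStore {
        public static void main(String[] args) throws SQLException {
            try (Connection c =
                         DriverManager.getConnection("jdbc:hsqldb:file:recordsdb", "SA", "")) {
                try (Statement s = c.createStatement()) {
                    s.execute("CREATE TABLE records (id VARCHAR(64), line VARCHAR(4096))");
                }
                // Insert while parsing the input files.
                try (PreparedStatement ins =
                             c.prepareStatement("INSERT INTO records (id, line) VALUES (?, ?)")) {
                    // ...for each parsed line (values below are placeholders):
                    ins.setString(1, "someId");
                    ins.setString(2, "the raw line");
                    ins.addBatch();
                    ins.executeBatch(); // in practice, flush every few thousand rows
                }
                // Read everything back ordered by ID, so all records for one
                // ID arrive consecutively and can be collated in one pass.
                try (Statement s = c.createStatement();
                     ResultSet rs = s.executeQuery(
                             "SELECT id, line FROM records ORDER BY id")) {
                    while (rs.next()) {
                        String id = rs.getString("id");
                        String line = rs.getString("line");
                        // consecutive rows share an id -- collate them here
                    }
                }
            }
        }
    }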

If I had to choose, I would start with option 2 and only take on the maintenance headache of option 1 if performance turns out to be sub-optimal (avoid premature optimisation).

You will need to use a buffered reader with a really large (64K) buffer to avoid thrashing the disk with competing read/write operations (disk I/O is what will kill performance).
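For example (a 64K character buffer instead of BufferedReader's 8K default; the file name is a placeholder):

    import java.io.*;
    import java.nio.charset.StandardCharsets;

    // Large explicit buffer so sequential reads hit the disk in big chunks.
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(new FileInputStream("data.txt"), StandardCharsets.UTF_8),
            64 * 1024);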
