
Mule 4 : Batch Processing : How to load millions of records from a database for batch processing without overloading memory?

Scenario: Read data from a database/CSV file containing millions of records for batch processing.

Issue: The total size of the data is more than 5 GB. While loading the data, the application throws an OutOfMemory error.

What are the different ways in which the data can be loaded into the application without overloading memory?

Since the question doesn't provide any details, I will make an educated guess that the out-of-memory error is related to something else rather than simply reading a big CSV. The CSV reader in DataWeave uses disk buffering by default and reads the data in chunks precisely to avoid out-of-memory errors. Batch is also designed to avoid out-of-memory errors: records are queued on disk and loaded a block of records at a time. Maybe there are operations inside the batch steps that are not handling the data size correctly. You should generate a heap dump at the moment of the crash and analyze it to understand where the memory is being spent.
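For reference, here is a minimal sketch of what a streaming read feeding a batch job can look like. All names (flow, config-ref, table, job and step names) and the numeric values are illustrative assumptions, not taken from the question; the point is only that the database rows are streamed to a file-backed buffer and the batch job pulls them in blocks instead of holding everything in memory.

    <!-- Illustrative Mule 4 config: stream rows from the database instead of loading them all at once -->
    <flow name="load-records-flow">
        <db:select config-ref="Database_Config" fetchSize="200">
            <!-- file-store streaming keeps only a limited number of objects in heap, the rest on disk -->
            <ee:repeatable-file-store-iterable inMemoryObjects="500"/>
            <db:sql>SELECT * FROM big_table</db:sql>
        </db:select>

        <!-- blockSize controls how many records are loaded from the batch queues at a time (default 100) -->
        <batch:job jobName="recordsBatchJob" blockSize="100" maxConcurrency="2">
            <batch:process-records>
                <batch:step name="processStep">
                    <!-- per-record processing goes here; avoid accumulating the whole payload in variables -->
                </batch:step>
            </batch:process-records>
        </batch:job>
    </flow>

To capture the heap dump automatically at the moment of the crash, the standard JVM flags can be added to the Mule runtime's wrapper.conf (the indexes and the dump path below are placeholders; use any unused wrapper.java.additional index and a writable directory):

    wrapper.java.additional.99=-XX:+HeapDumpOnOutOfMemoryError
    wrapper.java.additional.100=-XX:HeapDumpPath=/path/to/dumps

The resulting .hprof file can then be opened in a heap analyzer such as Eclipse MAT to see which objects are retaining the memory.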


 