
Concurrent read/write file in Java

I have to read a text file from my Java application.

The file contains many rows and is updated every X minutes by an external, unknown application that appends new lines to it.

I have to read all the rows from the file and then delete all the records I've just read.

Is it possible for me to read the file row by row, deleting each row as I read it, while at the same time allowing the external application to append other rows to the file?

This file is located in a Samba shared folder, so I'm using jCIFS and Java's BufferedReader class to read/write the file.
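For reference, a minimal sketch of that setup: reading the shared file row by row over SMB with jCIFS and a BufferedReader. The server, share, path and credentials below are placeholders, not values from the question.

```java
import jcifs.smb.NtlmPasswordAuthentication;
import jcifs.smb.SmbFile;
import jcifs.smb.SmbFileInputStream;

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class SmbLineReader {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials and SMB URL.
        NtlmPasswordAuthentication auth =
                new NtlmPasswordAuthentication("DOMAIN", "user", "password");
        SmbFile file = new SmbFile("smb://server/share/data.txt", auth);

        // Wrap the SMB input stream in a BufferedReader to read row by row.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new SmbFileInputStream(file)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // process the row here
            }
        }
    }
}
```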

Thanks in advance.

I don't know the perfect solution to your problem, but I would solve it differently:

  • rename the file (give it a unique, time-stamped name; see the sketch after this list)
  • the appender job will then automatically re-create it
  • process your time-stamped files (no need to delete them; keep them in place so you can later check what happened)
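A rough sketch of that approach using jCIFS, since the file lives on a Samba share. Names, paths and credentials are placeholders, and it assumes the appending application re-creates the file if it is missing.

```java
import jcifs.smb.NtlmPasswordAuthentication;
import jcifs.smb.SmbFile;
import jcifs.smb.SmbFileInputStream;

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RotateAndProcess {
    public static void main(String[] args) throws Exception {
        NtlmPasswordAuthentication auth =
                new NtlmPasswordAuthentication("DOMAIN", "user", "password");
        SmbFile live = new SmbFile("smb://server/share/data.txt", auth);

        // 1. Rename the live file to a unique, time-stamped name.
        SmbFile snapshot = new SmbFile(
                "smb://server/share/data-" + System.currentTimeMillis() + ".txt", auth);
        live.renameTo(snapshot);

        // 2. The appender re-creates data.txt on its next write (assumption).

        // 3. Process the snapshot; nothing is deleted, so the time-stamped
        //    files stay available if you need to check later what happened.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new SmbFileInputStream(snapshot)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // handle the row
            }
        }
    }
}
```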

The problem is that we don't know how the external application writes and/or reuses this file. It could be a problem if you delete rows while the external application relies on a counter to run correctly...

There is no good solution unless you know how the other app works.

Is it possible for me to read the file row by row, deleting each row as I read it, while at the same time allowing the external application to append other rows to the file?

Yes, you can open the same file for reading and writing from multiple processes. On Linux, for example, you will get two separate file descriptors for the same file. For writes under the size of PIPE_BUF (4096 bytes on Linux), it is safe to assume the operations are atomic, meaning the kernel handles the locking and unlocking to prevent race conditions.

Assuming Process A, which is writing to the file, has opened it in APPEND mode, then each time Process A tells the kernel to write(), the kernel first seeks to the end of the file. That means you can safely delete data in the file from Process B, as long as it is done in between Process A's write operations. And as long as Process A's writes don't exceed PIPE_BUF, Linux guarantees they will be atomic, i.e. Process A can issue write after write and Process B can constantly delete/write data, and no funky behavior will result.
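To put the append side in Java terms (a sketch, not code from the question; the path is a placeholder): opening the file with StandardOpenOption.APPEND corresponds to O_APPEND, so the kernel positions every write at the current end of the file.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;

public class Appender {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("data.txt"); // placeholder path

        // APPEND maps to O_APPEND: the kernel moves the write position to the
        // current end of the file for every write, so another process that
        // shortens the file between writes does not make this writer overwrite data.
        Files.write(file, Arrays.asList("row 1", "row 2"), StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```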

Java provides built-in file locks (java.nio.channels.FileLock). But it's important to understand that they are only "advisory", not "mandatory": Java does not enforce the restriction, so both processes must check whether another process holds the lock.
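A minimal sketch of the cooperating-reader side using java.nio file locks on a local file; whether advisory locks are honored across an SMB share depends on the server and on the other application also acquiring them. The path is a placeholder.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class LockingReader {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("data.txt"); // placeholder path

        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Advisory lock: it only keeps out other processes that also
            // acquire a FileLock on the same file.
            try (FileLock lock = channel.lock()) {
                ByteBuffer buffer = ByteBuffer.allocate((int) channel.size());
                channel.read(buffer);
                String rows = new String(buffer.array(), StandardCharsets.UTF_8);
                System.out.print(rows);   // process the rows read so far

                channel.truncate(0);      // then remove them from the file
            }
        }
    }
}
```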
