
How to read, update, insert, and delete objects in a txt file in Java

I am creating a command line interface representing a bookkeeping system. I need to use .txt files as an external database. Each bookkeeping building should be mapped to a list of all the books stored at that location. I want to be able to save each bookkeeping building object, mapped to all of its books, in the txt file, and to read, update, insert into, and delete from the same txt file even after my application has stopped running and started again.

public static ArrayList<Object> readObjects() {
    ArrayList<Object> al = new ArrayList<Object>();
    // try-with-resources closes the stream; readObject() signals the end
    // of the stream by throwing EOFException, not by returning null
    try (ObjectInputStream ois =
             new ObjectInputStream(new FileInputStream("outputFile"))) {
        while (true) {
            al.add(ois.readObject());
        }
    } catch (EOFException e) {
        // reached end of file: all objects have been read
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException | ClassNotFoundException e) {
        e.printStackTrace();
    }
    return al;
}

My current implementation can only write the objects in the array into a new txt file; it cannot update or delete entries when there is already an existing txt file with data from a previous session of the application.

Is it even possible to update object parameters or delete entire objects that are already saved in the txt file, or would I have to re-instantiate the objects from the txt file, delete everything from said txt file, do whatever updates I need with the objects previously extracted, and finally write the objects (and/or new objects) back into the txt file? Thanks everyone!
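For what it's worth, the usual pattern with Java's built-in serialization is exactly the second approach you describe: read everything back into memory, modify the list, and rewrite the whole file. A minimal sketch of that round trip (the `ObjectStore` class name and file path are just placeholders):

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

// Sketch of the "read everything, modify in memory, rewrite the file"
// pattern. Class name and file paths are illustrative placeholders.
public class ObjectStore {

    // Read every serialized object until end-of-file.
    public static List<Object> readObjects(String path) {
        List<Object> result = new ArrayList<>();
        File file = new File(path);
        if (!file.exists()) {
            return result; // first run: nothing saved yet
        }
        try (ObjectInputStream ois =
                 new ObjectInputStream(new FileInputStream(file))) {
            while (true) {
                result.add(ois.readObject());
            }
        } catch (EOFException e) {
            // normal termination: no more objects in the stream
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
        }
        return result;
    }

    // Overwrite the file with the current in-memory state.
    public static void writeObjects(List<?> objects, String path) {
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(new FileOutputStream(path))) {
            for (Object o : objects) {
                oos.writeObject(o);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```

An "update" or "delete" then becomes: call `readObjects`, mutate the returned list, and call `writeObjects` to replace the file's contents.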

There are a few ways to do this:

  1. Load file into memory at the beginning of each operation, then save at end

    • Slow, cumbersome, but doesn't require you to know when the application will exit.
  2. Load file into memory at start of application, operate in memory, save to file when application closed

    • Has the advantage of not reading that often (once per run), but runs into an issue if the application suddenly quits.
  3. Save each book object in a file, with a key (book id? name?) as the filename.

    • Cons: id needs to be unique, cannot search for other attributes without opening (worst case) all the files. Lots of files in filesystem.
    • Pros: Quick to access a particular book. Easy to maintain (just open the file you need at each operation). Don't need to read any useless data.

Just a few options for you to consider.
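Option 3 could be sketched roughly like this, assuming the books are `Serializable` and a unique id is used as the file name (the class name and directory are illustrative):

```java
import java.io.*;
import java.nio.file.*;

// Sketch of option 3: one serialized file per book, keyed by a unique id.
// Class name, directory, and the String-id key are illustrative assumptions.
public class PerBookStore {
    private final Path dir;

    public PerBookStore(Path dir) throws IOException {
        // ensure the storage directory exists
        this.dir = Files.createDirectories(dir);
    }

    // Create or overwrite the record for this id.
    public void save(String id, Serializable book) throws IOException {
        try (ObjectOutputStream oos = new ObjectOutputStream(
                Files.newOutputStream(dir.resolve(id)))) {
            oos.writeObject(book);
        }
    }

    // Load a single record without touching any other file.
    public Object load(String id) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(
                Files.newInputStream(dir.resolve(id)))) {
            return ois.readObject();
        }
    }

    // Delete a record; returns false if it did not exist.
    public boolean delete(String id) throws IOException {
        return Files.deleteIfExists(dir.resolve(id));
    }
}
```

Updates and deletes are then per-file operations, which is what makes this option quick for single-book access, while searching by anything other than the id still means opening many files.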

A few comments here, with the caveat that I have not worked with directly serializing objects to a file for a while.

  1. You say txt file but are using an object serializer that writes binary. Not a major issue, but slightly confusing. If you need this to be text then you need a serializer that will serialize your objects in a text-based format (e.g. CSV, JSON, XML). However this is likely incompatible with the point below.
  2. If the datasets are large, then the performant solution that avoids rewriting all the data on every save (if in a single file) requires fixed-size records, so that they can be overwritten in place via RandomAccessFile. Deletion of records is then accomplished by marking them as deleted (not actually removing them from the file) and running compaction operations that ultimately remove them by writing the data to a new file when necessary / triggered.
  3. Otherwise to update a file you need to write the whole file again with the new state. Given this is a command line application, every instance of running the command that results in an update would then mean writing the changed file(s) again. This is likely easier to implement than using RandomAccessFile but could be a performance issue for large data sets (100s of MB to GB etc).
  4. If re-writing all the files with the latest state, one should ideally write to a new temporary file and atomically move it to replace the old file. Failing that one should lock the file being written. This prevents corrupting the DB if there is an error (if the new writes fail somewhere the old file still exists) and prevents concurrency related issues.
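The write-then-atomic-rename step from point 4 might look roughly like this (the helper name is hypothetical, and `ATOMIC_MOVE` support depends on the filesystem, hence the fallback):

```java
import java.io.*;
import java.nio.file.*;

// Sketch of point 4: write the new state to a temp file, then atomically
// swap it in, so a crash mid-write never corrupts the existing data.
// Class and method names are illustrative.
public class AtomicSave {
    public static void saveAtomically(java.util.List<?> objects, Path target)
            throws IOException {
        // temp file in the same directory, so the final move stays on
        // one filesystem (a cross-filesystem move cannot be atomic)
        Path tmp = Files.createTempFile(
                target.toAbsolutePath().getParent(), "db", ".tmp");
        try (ObjectOutputStream oos = new ObjectOutputStream(
                 Files.newOutputStream(tmp))) {
            for (Object o : objects) {
                oos.writeObject(o);
            }
        }
        try {
            Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE,
                       StandardCopyOption.REPLACE_EXISTING);
        } catch (AtomicMoveNotSupportedException e) {
            // filesystem cannot do an atomic replace; fall back to a
            // plain (non-atomic) replace
            Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```

Either the old file or the complete new file exists at all times; readers never see a half-written database.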

Lastly, these problems have already been solved for you if you use a database system (many options).
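If you do roll your own storage rather than use a database, the fixed-size-record scheme from point 2 could be sketched like this (the record layout here, a one-byte deleted flag plus a fixed-width title, is purely an illustrative assumption):

```java
import java.io.*;

// Sketch of point 2: fixed-size records updated in place with
// RandomAccessFile. The layout (1-byte deleted flag + fixed-width
// UTF-16 title) is an illustrative assumption, not a standard format.
public class FixedRecordFile {
    private static final int TITLE_CHARS = 32;
    private static final int RECORD_SIZE = 1 + TITLE_CHARS * 2; // flag + chars

    // Overwrite record number `index` in place.
    public static void writeRecord(RandomAccessFile f, long index,
                                   String title) throws IOException {
        f.seek(index * RECORD_SIZE);
        f.writeByte(0); // 0 = live, 1 = deleted
        StringBuilder padded = new StringBuilder(title);
        padded.setLength(TITLE_CHARS); // pad/truncate to fixed width
        f.writeChars(padded.toString());
    }

    // Read a record; returns null if it was marked deleted.
    public static String readRecord(RandomAccessFile f, long index)
            throws IOException {
        f.seek(index * RECORD_SIZE);
        boolean deleted = f.readByte() != 0;
        char[] chars = new char[TITLE_CHARS];
        for (int i = 0; i < TITLE_CHARS; i++) {
            chars[i] = f.readChar();
        }
        return deleted ? null : new String(chars).trim();
    }

    // "Delete" by flipping the flag; compaction removes it later.
    public static void deleteRecord(RandomAccessFile f, long index)
            throws IOException {
        f.seek(index * RECORD_SIZE);
        f.writeByte(1);
    }
}
```

Because every record occupies exactly `RECORD_SIZE` bytes, updating or deleting record `i` touches only that slice of the file; nothing else is rewritten until compaction runs.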
