
File close issue on Windows

We are using File.WriteAllBytes to write data to the disk. But if a reboot happens right around the time we close the file, Windows writes nulls into the file. This seems to be happening on Windows 7, so when we come back to the file we see nulls in it. Is there a way to prevent this? Is Windows closing its internal handle after a certain time, and can this be forced to happen immediately?

Depending on what behavior you want, you can either put the machine on a UPS as 0A0D suggested, or in addition you can use the Transactional NTFS functionality available in Windows Vista and later. This allows you to write to the file system atomically, so in your case nothing would be written rather than improper data. It isn't directly part of the .NET Framework yet, but there are plenty of managed wrappers to be found online.
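If you would rather not take a dependency on a wrapper, a minimal P/Invoke sketch of the underlying Win32 calls (CreateTransaction, CreateFileTransactedW, CommitTransaction) could look like the following; WriteAllBytesTransacted is a made-up helper name and error handling is kept to the bare minimum:

    using System;
    using System.ComponentModel;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    static class TransactedWrite
    {
        [DllImport("KtmW32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern IntPtr CreateTransaction(
            IntPtr securityAttributes, IntPtr uow, uint createOptions,
            uint isolationLevel, uint isolationFlags, uint timeout,
            string description);

        [DllImport("KtmW32.dll", SetLastError = true)]
        static extern bool CommitTransaction(IntPtr transaction);

        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern SafeFileHandle CreateFileTransactedW(
            string fileName, uint desiredAccess, uint shareMode,
            IntPtr securityAttributes, uint creationDisposition,
            uint flagsAndAttributes, IntPtr templateFile,
            IntPtr transaction, IntPtr miniVersion, IntPtr extendedParameter);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CloseHandle(IntPtr handle);

        const uint GENERIC_WRITE = 0x40000000;
        const uint CREATE_ALWAYS = 2;
        const uint FILE_ATTRIBUTE_NORMAL = 0x80;

        // Either the file ends up with the complete new contents, or (if power
        // is lost before CommitTransaction) the previous contents are kept.
        public static void WriteAllBytesTransacted(string path, byte[] data)
        {
            IntPtr tx = CreateTransaction(IntPtr.Zero, IntPtr.Zero, 0, 0, 0, 0, null);
            if (tx == new IntPtr(-1))
                throw new Win32Exception(Marshal.GetLastWin32Error());

            try
            {
                using (SafeFileHandle handle = CreateFileTransactedW(
                    path, GENERIC_WRITE, 0, IntPtr.Zero, CREATE_ALWAYS,
                    FILE_ATTRIBUTE_NORMAL, IntPtr.Zero, tx, IntPtr.Zero, IntPtr.Zero))
                {
                    if (handle.IsInvalid)
                        throw new Win32Exception(Marshal.GetLastWin32Error());

                    using (var stream = new FileStream(handle, FileAccess.Write))
                    {
                        stream.Write(data, 0, data.Length);
                        stream.Flush(true); // .NET 4+: also flush OS buffers to the device
                    }
                }

                if (!CommitTransaction(tx))
                    throw new Win32Exception(Marshal.GetLastWin32Error());
            }
            finally
            {
                CloseHandle(tx);
            }
        }
    }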

Sometimes no data is better than wrong data. When your application starts up again, it can see that the file is missing and, depending on what your application does, "continue" from where it left off.

Based on your comments, there are no guarantees when writing a file - especially if you lose power during a file write. Your best bet is to put the PC on an uninterruptible power supply. If you are able to create an auto-restore mechanism, like the Microsoft Office products have, that would prevent complete loss of data, but it won't recover the data that was being written when power was lost.

I would consider this a case of a fatal exception (sudden loss of power). There isn't anything you can do about it, and generally, trying to handle them only makes matters worse.

I have had to deal with something similar; essentially an embedded system running on Windows, where the expectation is that the power might be shut off at any time.

In practice, I work with the understanding that a file written to disk less than 10 seconds before loss-of-power means that the file will be corrupted. (I use 30 seconds in my code to play it safe).

I am not aware of any way of guaranteeing from code that a file has been fully closed, flushed to disk, and that the disk hardware has finalized its writes, except to know that 10 (or 30) seconds have elapsed. It's not a very satisfying situation, but there it is.
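The closest I know of from managed code is to open the file with write-through and explicitly ask the OS to flush its buffers; even that only covers the OS caches, not necessarily the drive's own write cache, which is exactly the gap described above. A hedged sketch (the helper name is made up; Flush(bool) needs .NET 4):

    using System.IO;

    static class DiskWriter
    {
        // Rough replacement for File.WriteAllBytes: FileOptions.WriteThrough maps
        // to FILE_FLAG_WRITE_THROUGH and Flush(true) calls FlushFileBuffers, so
        // the OS cache is flushed -- but the drive's own write cache may still
        // hold the data, so this is not a hard guarantee.
        public static void WriteAllBytesThrough(string path, byte[] data)
        {
            using (var stream = new FileStream(
                path, FileMode.Create, FileAccess.Write, FileShare.None,
                4096, FileOptions.WriteThrough))
            {
                stream.Write(data, 0, data.Length);
                stream.Flush(true); // flush managed and OS buffers toward the device
            }
        }
    }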

Here are some pointers I have used in a real-life embedded project...

  • Use a system of checksums and backup files.
  • Checksums: at the end of any file you write, include a checksum (if it's a custom XML file then perhaps include a <checksum.../> tag of some sort). Then upon reading, if the checksum tag isn't there, or doesn't match the data, you must reject the file as corrupt.
  • Backups: every time you write a file, save a copy to one of two backups, say A and B. If A exists on disk but is less than 30 seconds old, copy to B instead. Then upon reading, read the original file first; if it is corrupt, read A, and if that is corrupt, read B. A sketch of both ideas follows this list.
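A rough C# sketch of the checksum and A/B rotation (SafeFileStore, Save, and Load are made-up names; MD5 stands in for whatever checksum you prefer and is appended as the last 16 bytes of the file rather than an XML tag):

    using System;
    using System.IO;
    using System.Linq;
    using System.Security.Cryptography;

    static class SafeFileStore
    {
        public static void Save(string path, byte[] payload)
        {
            byte[] withChecksum;
            using (var md5 = MD5.Create())
                withChecksum = payload.Concat(md5.ComputeHash(payload)).ToArray();

            // Backup rotation: write to A unless A is less than 30 seconds old,
            // in which case write to B instead.
            string backup = path + ".A";
            if (File.Exists(backup) &&
                DateTime.UtcNow - File.GetLastWriteTimeUtc(backup) < TimeSpan.FromSeconds(30))
            {
                backup = path + ".B";
            }

            File.WriteAllBytes(path, withChecksum);
            File.WriteAllBytes(backup, withChecksum);
        }

        // Read the original first, then backup A, then backup B; a copy is only
        // accepted if its trailing checksum matches its contents.
        public static byte[] Load(string path)
        {
            foreach (string candidate in new[] { path, path + ".A", path + ".B" })
            {
                if (!File.Exists(candidate))
                    continue;

                byte[] raw = File.ReadAllBytes(candidate);
                if (raw.Length < 16)
                    continue; // too short to even hold a checksum

                byte[] body = raw.Take(raw.Length - 16).ToArray();
                byte[] stored = raw.Skip(raw.Length - 16).ToArray();

                using (var md5 = MD5.Create())
                {
                    if (md5.ComputeHash(body).SequenceEqual(stored))
                        return body;
                }
            }
            return null; // every copy was missing or corrupt
        }
    }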

Also

  • If it is an embedded system, you need to run "chkdsk /F" on the drive you write to upon boot (see the sketch after this list), because if you are getting corrupted files, you are also going to get a corrupted file system.
  • NTFS disk systems are meant to be more robust against errors than FAT32. But I believe that NTFS disks can also require more time to fully flush their data. I use FAT32 when I can.
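For the chkdsk item, a hedged sketch of kicking off the check from managed code at startup (StartupDiskCheck is a made-up name, "D:" is a placeholder drive letter, and the process must be elevated):

    using System.Diagnostics;

    static class StartupDiskCheck
    {
        // Runs "chkdsk <drive> /F" and waits for it to finish. If the volume is
        // in use, chkdsk asks whether to schedule the check for the next boot,
        // so a "Y" is fed to its standard input to answer that prompt.
        public static int Run(string drive) // e.g. "D:"
        {
            var psi = new ProcessStartInfo("chkdsk.exe", drive + " /F")
            {
                UseShellExecute = false,
                RedirectStandardInput = true,
                CreateNoWindow = true
            };

            using (Process process = Process.Start(psi))
            {
                process.StandardInput.WriteLine("Y"); // answer the schedule-on-reboot prompt
                process.WaitForExit();
                return process.ExitCode; // 0 means no errors were found
            }
        }
    }

For example, call StartupDiskCheck.Run("D:") at startup, before opening any of your data files.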

Final thought: if you are really building an embedded system under Windows, you would do well to learn more about Windows Embedded and the Enhanced Write Filter.
