
Handle USN journal size full case

In my backup application I am using the USN journal to track changes to a volume. The Microsoft documentation mentions that the USN journal has a maximum size, and that when it gets full, records get deleted:

MaximumSize is the target maximum size for the change journal, in bytes. The change journal can grow larger than this value, but at NTFS file system checkpoints the NTFS file system examines the journal and trims it when its size exceeds the value of MaximumSize plus the value of AllocationDelta. (At NTFS file system checkpoints, the operating system writes records to the NTFS file system log file that allow the NTFS file system to determine what processing is required to recover from a failure.)
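The documented trimming rule can be sketched as a simple size check. This is a minimal illustration of the behavior described above, not actual NTFS code; the parameter names mirror the fields of the USN_JOURNAL_DATA structure:

```python
def journal_needs_trim(current_size: int,
                       maximum_size: int,
                       allocation_delta: int) -> bool:
    """At an NTFS checkpoint, the journal is trimmed once its size
    exceeds MaximumSize plus AllocationDelta."""
    return current_size > maximum_size + allocation_delta

# Example: a 32 MiB journal with an 8 MiB allocation delta is only
# trimmed once it grows past 40 MiB.
print(journal_needs_trim(39 * 2**20, 32 * 2**20, 8 * 2**20))  # False
print(journal_needs_trim(41 * 2**20, 32 * 2**20, 8 * 2**20))  # True
```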

So what actually happens when the journal is full? Do all records get deleted, or does it only delete the oldest records to make room for new ones? How can I handle the case where the USN journal is full?

The USN journal is a sparse file, and the USN numbers themselves are indexes into this file: actual byte offsets. The trick is that when this sparse file exceeds its size threshold, NTFS removes the earliest entries. This is the magic of sparse files: the offsets never have to change just because early records got chopped off. NTFS keeps metadata about the zeroed-out ranges and transparently returns zeros to clients reading the file. It's a rolling log.
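That rolling-log behavior can be modeled with a toy journal in which each record's USN is its byte offset, and trimming drops the earliest records without ever renumbering the survivors. This is a hypothetical sketch; real USN records are variable-length USN_RECORD structures, not fixed-size entries:

```python
class ToyUsnJournal:
    """Toy model of the USN journal: a mapping from byte offset
    (the USN) to a record. Trimming deletes the earliest records
    but later offsets remain valid and unchanged."""
    RECORD_SIZE = 64  # assumption; real records are variable-length

    def __init__(self):
        self.records = {}          # offset -> payload
        self.next_usn = 0          # where the next record is written
        self.lowest_valid_usn = 0  # first offset not yet trimmed

    def append(self, payload):
        usn = self.next_usn
        self.records[usn] = payload
        self.next_usn += self.RECORD_SIZE
        return usn

    def trim(self, new_lowest):
        """Zero out everything before new_lowest (in real NTFS,
        roughly an AllocationDelta worth of the oldest records)."""
        for usn in [u for u in self.records if u < new_lowest]:
            del self.records[usn]
        self.lowest_valid_usn = new_lowest

j = ToyUsnJournal()
usns = [j.append(f"change-{i}") for i in range(4)]
j.trim(usns[2])            # the two earliest records are gone...
print(j.records[usns[2]])  # ...but later USNs still resolve: change-2
```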

The unit of work for zeroing out is the AllocationDelta. Every time this zeroing out occurs, NTFS sets a new LowestValidUsn value.

So, when you do a backup, you want to record the NextUsn, which is a pointer to where the next USN record will be written. Later, when you do a subsequent backup, if your saved NextUsn is greater than or equal to the LowestValidUsn, then all the changes since your last backup are still in the journal, and you can rely on the USN journal to optimize your backup process.
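That comparison can be written down directly. In this sketch, `saved_next_usn` is the NextUsn recorded at the end of the previous backup; the function name is illustrative:

```python
def can_do_incremental(saved_next_usn: int, lowest_valid_usn: int) -> bool:
    """If the USN we saved last time has not been trimmed away,
    every change since the last backup is still in the journal."""
    return saved_next_usn >= lowest_valid_usn

print(can_do_incremental(saved_next_usn=5000, lowest_valid_usn=4096))  # True
print(can_do_incremental(saved_next_usn=3000, lowest_valid_usn=4096))  # False: full rescan
```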

If the USN actually overflows MaxUsn, I'm not sure what happens. It seems awfully unlikely, and worth knowing what could bring it on. I've read conflicting accounts of what actually occurs: either journaling stops, or NTFS simply resets the journal cold.

If the journal gets reset by an administrator, or automatically recreated, NTFS assigns a new ID to the journal. In that case, a backup program has to start over with a fresh read of the whole volume.
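Combining the journal-ID check with the NextUsn/LowestValidUsn comparison gives the overall decision an incremental backup has to make. The field names follow the USN_JOURNAL_DATA structure, but the function itself is an illustrative sketch, not a real API:

```python
def choose_backup_mode(saved_journal_id: int, saved_next_usn: int,
                       current_journal_id: int, lowest_valid_usn: int) -> str:
    # A new journal ID means the journal was deleted or recreated:
    # old USNs are meaningless, so rescan the whole volume.
    if saved_journal_id != current_journal_id:
        return "full"
    # Our saved position was trimmed away: changes may have been lost.
    if saved_next_usn < lowest_valid_usn:
        return "full"
    return "incremental"

print(choose_backup_mode(1, 5000, 1, 4096))  # incremental
print(choose_backup_mode(1, 5000, 2, 0))     # full (journal recreated)
print(choose_backup_mode(1, 3000, 1, 4096))  # full (records trimmed)
```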
