C Disk I/O - write after read at the same offset of a file will make read throughput very low

Background:

I'm developing a database-related program, and I need to flush dirty metadata from memory to disk sequentially. /dev/sda1 is a raw volume (no file system on it), so data on /dev/sda1 is accessed block by block, and the blocks are physically adjacent when accessed sequentially. I use direct I/O, so the I/O bypasses the caching mechanism of the file system and accesses the blocks on the disk directly.

Problems:

After opening /dev/sda1, I'll read one block, update the block and write the block back to the same offset from the beginning of /dev/sda1, iteratively.

The code is like below:

//block_size = 256KB
int file = open("/dev/sda1", O_RDWR|O_LARGEFILE|O_DIRECT);
for(int i=0; i<N; i++) {
    pread(file, buffer, block_size, i*block_size);
    // Update the buffer
    pwrite(file, buffer, block_size, i*block_size);
}
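
For context, here is a minimal, compilable sketch of the same loop (my own reconstruction, not the original program). With O_DIRECT, the buffer address, transfer size, and file offset generally have to be aligned to the device's logical block size, otherwise pread/pwrite fail with EINVAL, so the buffer is allocated with posix_memalign; N and the 4096-byte alignment are example values.

#define _GNU_SOURCE              /* needed for O_DIRECT (and O_LARGEFILE) on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE (256 * 1024)  /* 256 KB blocks, as in the question */
#define N          1024          /* number of blocks to process; example value */

int main(void)
{
    int fd = open("/dev/sda1", O_RDWR | O_LARGEFILE | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires an aligned buffer; 4096 covers common logical block sizes */
    void *buffer;
    if (posix_memalign(&buffer, 4096, BLOCK_SIZE) != 0) { close(fd); return 1; }

    for (long i = 0; i < N; i++) {
        off_t off = (off_t)i * BLOCK_SIZE;
        if (pread(fd, buffer, BLOCK_SIZE, off) != BLOCK_SIZE) { perror("pread"); break; }
        /* ... update the block in buffer here ... */
        if (pwrite(fd, buffer, BLOCK_SIZE, off) != BLOCK_SIZE) { perror("pwrite"); break; }
    }

    free(buffer);
    close(fd);
    return 0;
}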

I found that if I don't do pwrite, read throughput is 125 MB/s.

If I do pwrite, read throughput will be 21 MB/s, and write throughput is 169 MB/s.

If I do pread after pwrite, write throughput is 115 MB/s, and read throughput is 208 MB/s.

I also tried read()/write() and aio_read()/aio_write(), but the problem remains. I don't know why a write after a read at the same position of the file makes the read throughput so low.

If I access more blocks at a time, like this:

pread(file, buffer, num_blocks * block_size, i*block_size);

The problem is mitigated; please see the chart.
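
For illustration, a rough sketch of that batched variant, reusing the fd/BLOCK_SIZE/N setup from the sketch above (num_blocks = 8 is only an example value, and the batch still has to respect the O_DIRECT alignment constraints):

/* Batched variant: read num_blocks adjacent blocks in one call, update them,
   then write them back in one call. Assumes N is a multiple of num_blocks. */
size_t num_blocks  = 8;                        /* tuning knob; example value */
size_t batch_bytes = num_blocks * BLOCK_SIZE;
void  *batch_buf;
if (posix_memalign(&batch_buf, 4096, batch_bytes) != 0) { /* handle error */ }

for (long i = 0; i < N; i += num_blocks) {
    off_t off = (off_t)i * BLOCK_SIZE;
    pread(fd, batch_buf, batch_bytes, off);
    /* ... update the num_blocks blocks inside batch_buf ... */
    pwrite(fd, batch_buf, batch_bytes, off);
}
free(batch_buf);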

Answer:

"I use direct I/O, so the I/O bypasses the caching mechanism of the file system and accesses the blocks on the disk directly."

If you don't have a file system on the device and are using the device directly for reads/writes, then no file system cache comes into the picture.

The behavior you observed is typical of disk access and I/O.

"I found that if I don't do pwrite, read throughput is 125 MB/s."

Reason: The disk just reads data; it doesn't have to go back to the same offset and write data, so that's one less operation.

"If I do pwrite, read throughput will be 21 MB/s, and write throughput is 169 MB/s."

Reason: Your disk might simply have a higher write speed; most likely the disk's buffer is caching the writes rather than them directly hitting the media.

"If I do pread after pwrite, write throughput is 115 MB/s, and read throughput is 208 MB/s."

Reason: Most likely the written data is being cached at the disk level, so the read gets data from the cache instead of the media.

To get optimal performance, you should use asynchronous I/O and work on a number of blocks at a time. However, you have to use a reasonable number of blocks; you can't use a very large number. Find out what is optimal by trial and error.
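
As a rough illustration of that suggestion (my own sketch, not from the original answer), here is how POSIX AIO, which the question already experimented with, could keep several block-sized reads in flight at once. QUEUE_DEPTH = 4 is an arbitrary example, the buffers are assumed to be allocated with the same O_DIRECT alignment as above, and on older glibc you may need to link with -lrt.

#include <aio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>

#define QUEUE_DEPTH 4   /* number of requests kept in flight; tune by trial and error */

/* Submit QUEUE_DEPTH asynchronous reads of consecutive blocks, then wait for all
   of them to complete before updating and writing the blocks back. */
void read_blocks_async(int fd, void *buffers[QUEUE_DEPTH],
                       size_t block_size, off_t start_offset)
{
    struct aiocb cbs[QUEUE_DEPTH];
    const struct aiocb *list[QUEUE_DEPTH];

    for (int i = 0; i < QUEUE_DEPTH; i++) {
        memset(&cbs[i], 0, sizeof cbs[i]);
        cbs[i].aio_fildes = fd;
        cbs[i].aio_buf    = buffers[i];
        cbs[i].aio_nbytes = block_size;
        cbs[i].aio_offset = start_offset + (off_t)i * block_size;
        aio_read(&cbs[i]);            /* queue the read; returns immediately */
        list[i] = &cbs[i];
    }

    /* Wait until every queued request has finished. */
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        while (aio_error(&cbs[i]) == EINPROGRESS)
            aio_suspend(list, QUEUE_DEPTH, NULL);
        aio_return(&cbs[i]);          /* collect the result of each request */
    }
}

A matching set of aio_write calls can then flush the updated blocks, so the drive always has several requests queued instead of strictly alternating one read and one write.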
