
Improving performance of ofstream?

I am communicating with some parallel processes using FIFOs. I am reading the pipe with read(), and I am writing to the named pipe by doing this:

ofstream pipe(namepipe);

pipe << data << endl;
pipe.close();

I have been noticing that the performance is horrible though! It sometimes takes around 40 ms, which is extreme latency in my opinion. I read that the use of std::endl can affect performance. Should I avoid using endl?

Does using ofstream affect performance? Are there any other alternatives to this method?

Thank you!

A cheap hack:

std::ios::sync_with_stdio(false);

Note: use this only if you are not going to be mixing C I/O with C++ I/O.
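For example, a minimal sketch of where the call would go; it must run once, before any other I/O:

#include <iostream>

int main() {
    // Decouple the C++ standard streams from C stdio buffering.
    // Only safe if printf/scanf are never mixed with cout/cin.
    std::ios::sync_with_stdio(false);
    // ... rest of the program ...
}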

The reason std::endl might affect I/O performance is that it flushes the stream. So to avoid this, you should use '\n' instead.
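Applied to the code from the question, the write becomes:

pipe << data << '\n';  // newline only; no flush, unlike std::endl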

Avoiding having to open and close the stream repeatedly will also help; see the sketch below.
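A minimal sketch of that, assuming repeated writes to the same FIFO (the path and message loop are placeholders, not from the question):

#include <fstream>
#include <string>

int main() {
    std::ofstream pipe("namepipe");   // open once, up front
    for (int i = 0; i < 100; ++i) {
        std::string data = "message " + std::to_string(i);
        pipe << data << '\n';         // no per-write flush or reopen
    }
}                                     // flushed and closed once, here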

When working with large files with fstream, make sure to use a stream buffer and don't use endl (endl flushes the output stream).

At least the MSVC implementation copies one char at a time to the filebuf when no buffer is set (see streambuf::xsputn()), which can make your application CPU-bound and result in lower I/O rates.

So, try adding this to your code before doing the writing:

const size_t bufsize = 256*1024;
char buf[bufsize];
// install the buffer before any I/O is done on the stream
mystream.rdbuf()->pubsetbuf(buf, bufsize);
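Put together, a self-contained sketch of the buffered approach (the file name and the loop are assumptions for illustration):

#include <cstddef>
#include <fstream>

int main() {
    const std::size_t bufsize = 256 * 1024;
    char buf[bufsize];                     // 256 KiB; heap-allocate if stack space is a concern

    std::ofstream out;
    out.rdbuf()->pubsetbuf(buf, bufsize);  // install the buffer before opening
    out.open("output.txt");                // placeholder file name

    for (int i = 0; i < 1000000; ++i)
        out << "line " << i << '\n';       // buffered writes, no per-line flush
}                                          // flushed and closed on destruction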

NB: You can find a complete sample application here.
