
How to write lines into a logfile from within multiple distributed processes in parallel

I tried boost::log but I'm not getting anything in the file at all. In fact, the file is not even being created.

Again: the point is to have many client processes, distributed over the network, writing messages into the same file.

And I've no use for all these attributes and sinks and sources and filters -- I don't even know what they are intended for. In fact, I would prefer a simple constructor and a streaming operator.

Here is the current code, which produces no output at all:

#include <boost/log/core.hpp>
#include <boost/log/trivial.hpp>
#include <boost/log/expressions.hpp>
#include <boost/log/sinks/text_file_backend.hpp>
#include <boost/log/utility/setup/file.hpp>
#include <boost/log/utility/setup/common_attributes.hpp>
#include <boost/log/sources/severity_logger.hpp>
#include <boost/log/sources/record_ostream.hpp>


static int initFileLogging(void)
{
        namespace logging = boost::log;
        namespace src = boost::log::sources;
        namespace sinks = boost::log::sinks;
        namespace keywords = boost::log::keywords;
        logging::add_file_log("~/smc.log");
        logging::core::get()->set_filter(logging::trivial::severity >= logging::trivial::info);
        logging::add_common_attributes();
        return 0;
}
int main(int, char**)
{
        initFileLogging();

        namespace logging = boost::log;
        namespace src = boost::log::sources;
        using namespace logging::trivial;
        src::severity_logger< severity_level > lg;
        BOOST_LOG_SEV(lg, debug) << "test" << std::endl;
}

I tried boost::log but I'm not getting anything in the file at all. In fact, the file is not even being created.

In the sample code piece, you set up a global filter that only passes log records with severity level trivial::info or higher, but the log record you emit has level trivial::debug, which is lower. The log record gets discarded, and since no log record reaches the file sink backend, the file is never created.

And I've no use for all these attributes and sinks and sources and filters

Apparently, you do, since you're using a severity level (which is an attribute), a filter, and a file sink. You will probably be using a different sink type as well in the final solution.

You should really read the Design section of the library documentation to better understand how the library works, in order to use it efficiently.

How to write lines into a logfile from within multiple distributed processes in parallel

The point is to have many client processes distributed over the network writing messages into the same file.

If you truly have multiple processes on different machines in a network, then there is no built-in solution in Boost.Log. If you're working on a UNIX-like system, you most likely have access to a syslog service, and Boost.Log can generate syslog messages with the syslog sink backend. You could configure the syslog service on all your machines to forward messages to a common server that writes them into a common log file. Here, the syslog service implements the network communication and file writing parts.
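As a configuration sketch, a syslog sink that sends records to a remote collector could be set up roughly like this (the host name logserver.example.com is a placeholder; the collector's syslog service must be configured to accept remote UDP messages):

```cpp
#include <boost/log/core.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/log/sinks/syslog_backend.hpp>
#include <boost/make_shared.hpp>

namespace logging = boost::log;
namespace sinks = boost::log::sinks;
namespace keywords = boost::log::keywords;

void initSyslogSink()
{
    // Send RFC 3164 messages over UDP instead of the local native syslog API,
    // so records can leave the machine.
    boost::shared_ptr<sinks::syslog_backend> backend =
        boost::make_shared<sinks::syslog_backend>(
            keywords::facility = sinks::syslog::user,
            keywords::use_impl = sinks::syslog::udp_socket_based);

    // Placeholder collector address; syslog listens on UDP port 514 by default.
    backend->set_target_address("logserver.example.com");

    logging::core::get()->add_sink(
        boost::make_shared<sinks::synchronous_sink<sinks::syslog_backend>>(backend));
}
```

With this in place, the syslog daemon on the collector machine does the file writing, so no custom server code is needed.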

Alternatively, you could create a TCP iostream using Boost.ASIO and use it with the ostream sink backend in Boost.Log. The TCP iostream would have to connect to the common server that receives the formatted log records and writes them to a file. You will have to write the server yourself, though.
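A sketch of the client side, assuming a collector listening at logserver.example.com:5000 (both placeholders). Boost.ASIO's tcp::iostream derives from std::iostream, so it can be handed directly to the ostream backend:

```cpp
#include <boost/asio.hpp>
#include <boost/log/core.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/log/sinks/text_ostream_backend.hpp>
#include <boost/make_shared.hpp>

namespace logging = boost::log;
namespace sinks = boost::log::sinks;

void initTcpSink()
{
    // Connect to the (hypothetical) collector; the stream blocks until connected.
    boost::shared_ptr<boost::asio::ip::tcp::iostream> stream =
        boost::make_shared<boost::asio::ip::tcp::iostream>(
            "logserver.example.com", "5000");

    // The ostream backend accepts any shared_ptr<std::ostream>,
    // which a tcp::iostream satisfies.
    boost::shared_ptr<sinks::text_ostream_backend> backend =
        boost::make_shared<sinks::text_ostream_backend>();
    backend->add_stream(stream);
    backend->auto_flush(true); // push each record to the socket immediately

    logging::core::get()->add_sink(
        boost::make_shared<sinks::synchronous_sink<sinks::text_ostream_backend>>(backend));
}
```

The server then only has to accept connections and append whatever it receives to the log file; serializing the writes in that single process is what makes concurrent clients safe.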

If your multiple processes are going to run on the same machine (i.e. not distributed over the network), then you could use the IPC message queue backend to pass log records to a common process that writes the log file. This backend uses an IPC message queue implemented on top of shared memory, which may be more efficient than sockets. You will have to implement the server process yourself in this case as well; there is an example in the docs.
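A client-side sketch following the pattern from the Boost.Log documentation (the queue name smc_log_queue and the capacity values are placeholders; the collector process must open a queue with the same name and drain it):

```cpp
#include <boost/log/core.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/log/sinks/text_ipc_message_queue_backend.hpp>
#include <boost/log/utility/ipc/reliable_message_queue.hpp>
#include <boost/log/utility/ipc/object_name.hpp>
#include <boost/make_shared.hpp>

namespace logging = boost::log;
namespace sinks = boost::log::sinks;
namespace keywords = boost::log::keywords;

typedef sinks::text_ipc_message_queue_backend<logging::ipc::reliable_message_queue> backend_t;
typedef sinks::synchronous_sink<backend_t> sink_t;

void initIpcSink()
{
    // Open (or create) a shared-memory message queue; every client process
    // uses the same name, and the collector process reads from it.
    boost::shared_ptr<sink_t> sink = boost::make_shared<sink_t>(
        keywords::message_queue = logging::ipc::reliable_message_queue(
            keywords::open_mode = logging::open_mode::open_or_create,
            keywords::name = logging::ipc::object_name(
                logging::ipc::object_name::user, "smc_log_queue"),
            keywords::capacity = 256,      // max number of queued blocks
            keywords::block_size = 1024)); // bytes per block
    logging::core::get()->add_sink(sink);
}
```

The reliable_message_queue blocks or retries on overflow depending on the configured overflow policy, so clients do not silently lose records when the collector falls behind.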

If none of this works, you could always implement your own sink backend.

In any case, you can see a common theme in all these suggestions: you need one common process that writes the log file, and client processes that pass their log records to that server. The only question is which transport to use and how to implement it.

You may be tempted to avoid a message-passing design and instead use file locking to synchronize concurrent access by multiple processes to a common file (which, presumably, would be mounted in a shared folder on each client machine). First, note that Boost.Log does not implement file locking, so using the file sinks available out of the box on a shared file like that will not work as desired. Second, I would advise against such an approach anyway, because (a) file locking may not work reliably over a network share, and (b) file locking creates a contention point across all your client processes, which will hurt scalability.
