
Why will ZeroMQ PUSH/PULL work, but not PUB/SUB?

Environment: NVIDIA-flavored Ubuntu 18.01 on their Jetson development board with their TX2i processor. ZMQ 4.3.2, utilizing the cppzmq C++ wrapper for ZMQ.

I've got a slew of code running using Google protocol buffers with ZeroMQ, and it's all PUSH/PULL, and it works fine, except that I've got one case that isn't point-to-point but 1:3. The correct solution here is to do PUB/SUB, but I cannot get messages through to my subscriber.

I shaved my code down to this simple example. If I uncomment the #define statements, the subscriber gets nothing. Commented out (which compiles as PUSH/PULL instead of PUB/SUB), the subscriber gets the message as expected. With the excessive sleep_for() times, I would expect the subscriber has ample time to register before the publisher performs the send.

EDIT:

Why the try/catch on the subscriber? I was getting an exception early on, and believed it was because the publisher wasn't ready. This no longer appears to be the case, so it wasn't what I thought it was.

// Publisher
#include "/usr/local/include/zmq.hpp"
#include "protobuf_namespace.pb.h"
#include <chrono>
#include <thread>


#define PUB_SUB

int main( void )
{
  zmq::context_t* m_pContext = new zmq::context_t( 1 );

#ifdef PUB_SUB
  zmq::socket_t*  m_pSocket  = new zmq::socket_t( *m_pContext, ZMQ_PUB );
#else
  zmq::socket_t*  m_pSocket  = new zmq::socket_t( *m_pContext, ZMQ_PUSH );
#endif

  std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
  //m_pSocket->bind( "tcp://*:53001" );       // using '*' or specific IP doesn't change result
  m_pSocket->bind( "tcp://127.0.0.1:53001" );
  std::this_thread::sleep_for( std::chrono::seconds( 1 ) );

  // Send the parameters
  protobuf_namespace::Params params;
  params.set_calibrationdata( protobuf_namespace::CalDataType::CAL_REQUESTED ); // init one value to non-zero
  std::string        params_str = params.SerializeAsString();
  zmq::message_t     zmsg( params_str.size() );

  memcpy( zmsg.data(), params_str.c_str(), params_str.size() );
  m_pSocket->send( zmsg, zmq::send_flags::none );

  std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
  m_pSocket->close();
  zmq_ctx_destroy( m_pContext );
}

// Subscriber - start me first!
#include "/usr/local/include/zmq.hpp"
#include "protobuf_namespace.pb.h"
#include <chrono>
#include <thread>
#include <stdio.h>

#define PUB_SUB


int main( void )
{
  zmq::context_t* m_pContext = new zmq::context_t( 1 );

#ifdef PUB_SUB
  zmq::socket_t*  m_pSocket  = new zmq::socket_t( *m_pContext, ZMQ_SUB );
  m_pSocket->connect( "tcp://127.0.0.1:53001" );

  int linger = 0;
  zmq_setsockopt( m_pSocket, ZMQ_LINGER, &linger, sizeof( linger ) );
  zmq_setsockopt( m_pSocket, ZMQ_SUBSCRIBE, "", 0 );
#else
  zmq::socket_t*  m_pSocket  = new zmq::socket_t( *m_pContext, ZMQ_PULL );
  m_pSocket->connect( "tcp://127.0.0.1:53001" );
#endif

  protobuf_namespace::Params params;
  zmq::message_t zmsg;
  bool retry = true;

  do {
    try {
      m_pSocket->recv( zmsg, zmq::recv_flags::none );
      retry = false;
      std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
    } catch( ... ) { 
      printf("caught\n");
    }
    std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
  } while( retry );

  std::string param_str( static_cast<char*>( zmsg.data() ), zmsg.size() );
  params.ParseFromString( param_str );

  if( params.calibrationdata() == protobuf_namespace::CalDataType::CAL_REQUESTED )
    printf( "CAL_REQUESTED\n" );
  else
    printf( "bad data\n" );


  std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
  m_pSocket->close();
  zmq_ctx_destroy( m_pContext );
}

In case one has never worked with ZeroMQ, one may here enjoy to first look at " ZeroMQ Principles in less than Five Seconds " before diving into further details.

Q : Why the try/catch on the subscriber?

Because :
a ) // Subscriber - start me first! and yet the PUB -side sleeps almost "forever" before doing the tcp:// transport-class path setup in .bind() that would accept any first .connect() , here preceded by a massive nap ...
std::this_thread::sleep_for( std::chrono::seconds( 1 ) ); and

b ) the try 'd m_pSocket->recv( zmsg, zmq::recv_flags::none ); must by definition throw an exception, as there is no tcp:// transport-class path set up so far ( the PUB -side has not yet returned from its sleep ).

Q : Why will ZeroMQ PUSH/PULL work but not PUB/SUB?

Well, both will, if designed properly, respecting the published API.

Just remove any blocking sleep() -s preventing the SUB -s from joining in, so that the .connect() -s can succeed ASAP. Plus perhaps move to a non-blocking form of the .recv() -ops ( refactor the try/catch ), as is common practice, to better reflect the nature of preventively designed .poll() -based or reactive .recv(..., ZMQ_NOBLOCK ) -based event-handling.
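
For illustration, a minimal sketch of such a non-blocking receive loop, assuming the same cppzmq >= 4.3 API that the code above already uses ( endpoint and polling interval are placeholders only ) :

// Hedged sketch: non-blocking SUB receive loop ( cppzmq >= 4.3 style API assumed )
#include "/usr/local/include/zmq.hpp"
#include <chrono>
#include <thread>
#include <stdio.h>

int main( void )
{
  zmq::context_t ctx( 1 );
  zmq::socket_t  sub( ctx, ZMQ_SUB );

  sub.connect( "tcp://127.0.0.1:53001" );
  sub.setsockopt( ZMQ_SUBSCRIBE, "", 0 );            // set the option on the wrapper itself

  zmq::message_t zmsg;
  for ( ;; )
  {
    // recv_flags::dontwait returns an empty result instead of blocking
    // ( and instead of relying on an exception ) when nothing has arrived yet
    auto res = sub.recv( zmsg, zmq::recv_flags::dontwait );
    if ( res )
    {
      printf( "got %zu bytes\n", zmsg.size() );
      break;
    }
    std::this_thread::sleep_for( std::chrono::milliseconds( 10 ) );
  }

  sub.close();
}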


Last, but not least :

ZeroMQ v4+ ( as opposed to the v2+ and pre-v3.? API ) was switched to use PUB -side message filtering, so a due consideration of the subscription management ( timing / error-handling / resilience ) has to take place as well.
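
As a small, hedged illustration of that subscription-driven filtering ( an inproc toy example; the topic strings and the endpoint are made up for this sketch, not taken from the code above ) :

// Hedged sketch: topic-prefix filtering with PUB/SUB over inproc
#include "/usr/local/include/zmq.hpp"
#include <chrono>
#include <thread>
#include <string.h>
#include <stdio.h>

int main( void )
{
  zmq::context_t ctx( 1 );

  zmq::socket_t pub( ctx, ZMQ_PUB );
  pub.bind( "inproc://demo" );

  zmq::socket_t sub( ctx, ZMQ_SUB );
  sub.connect( "inproc://demo" );
  sub.setsockopt( ZMQ_SUBSCRIBE, "params.", 7 );     // prefix filter

  // give the subscription a moment to propagate to the PUB -side
  std::this_thread::sleep_for( std::chrono::milliseconds( 100 ) );

  const char* msgs[] = { "params.cal=1", "status.idle" };
  for ( const char* m : msgs )
  {
    zmq::message_t zmsg( m, strlen( m ) );
    pub.send( zmsg, zmq::send_flags::none );
  }

  zmq::message_t got;
  if ( sub.recv( got, zmq::recv_flags::none ) )
    printf( "received: %.*s\n", (int)got.size(), (const char*)got.data() );
  // only "params.cal=1" is delivered; "status.idle" never reaches the SUB -side
}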

In any case of doubt, one may integrate a use of the ZeroMQ built-in socket_monitor tools, extending the Context() -instance, and trace / inspect each and every event internal to the Context() -instance, well under the published API-events, down to the lowest Level-of-Detail.
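
A minimal sketch of such tracing, assuming the cppzmq socket_t in use still exposes its raw libzmq handle through its void* conversion ( newer cppzmq releases provide a .handle() method and a zmq::monitor_t helper instead ); the monitor endpoint name is made up for this sketch :

// Hedged sketch: tracing low-level socket events with zmq_socket_monitor()
#include "/usr/local/include/zmq.hpp"
#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main( void )
{
  zmq::context_t ctx( 1 );
  zmq::socket_t  sub( ctx, ZMQ_SUB );

  // ask libzmq to publish this socket's internal events on an inproc PAIR endpoint;
  // the static_cast relies on the wrapper's raw-handle conversion ( an assumption here )
  zmq_socket_monitor( static_cast<void*>( sub ), "inproc://sub-monitor", ZMQ_EVENT_ALL );

  zmq::socket_t mon( ctx, ZMQ_PAIR );
  mon.connect( "inproc://sub-monitor" );

  sub.setsockopt( ZMQ_SUBSCRIBE, "", 0 );
  sub.connect( "tcp://127.0.0.1:53001" );

  // each monitor event arrives as two frames: [ event-id + value ][ endpoint ]
  for ( int i = 0; i < 4; ++i )
  {
    zmq::message_t frame1, frame2;
    if ( !mon.recv( frame1, zmq::recv_flags::none ) ) break;
    if ( !mon.recv( frame2, zmq::recv_flags::none ) ) break;

    uint16_t event;
    memcpy( &event, frame1.data(), sizeof( event ) );
    printf( "event 0x%04x on %.*s\n", event, (int)frame2.size(), (const char*)frame2.data() );
  }
}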

Do not hesitate to read and ask more.
