
ZeroMQ - subscriber is able to receive data from the 1st publisher but does not receive from the 2nd publisher, which comes up after a few loops

In the following code, the publisher thread publishes 5 messages, after which a new publisher socket is supposed to send data to the subscriber; but the subscriber sits in a while(1) loop on recv() and never gets a message from the 2nd publisher. How can the subscriber connect to publisher 2, with some exception handling showing that the subscriber is trying to connect?

I tried XPUB/XSUB, PUSH/PULL and also ZMQ_HEARTBEAT, but no exception is caught. I also tried "inproc://#1" instead of "tcp://127.0.0.1:5555" . Nothing worked.

#include <future>
#include <iostream>
#include <string>
#include "zmq.hpp"
#include <strings.h>
#include <stdint.h>
#include <chrono>
#include <thread>
void PublisherThread(zmq::context_t* ctx){
    try{
        std::cout << "PublisherThread: " << std::endl;
        zmq::socket_t publisher(*ctx, ZMQ_PUB);
        publisher.bind("tcp://127.0.0.1:5555");
        int counter = 0;
        while (true){
            try{
                publisher.send(zmq::str_buffer("A"), zmq::send_flags::sndmore);
                publisher.send(zmq::str_buffer("Message in A envelope\n"));
                std::this_thread::sleep_for(std::chrono::milliseconds(2000));
                publisher.send(zmq::str_buffer("B"), zmq::send_flags::sndmore);
                publisher.send(zmq::str_buffer("Message in B envelope\n"));
                std::this_thread::sleep_for(std::chrono::milliseconds(2000));
                publisher.send(zmq::str_buffer("C"), zmq::send_flags::sndmore);
                publisher.send(zmq::str_buffer("Message in C envelope\n"));
                std::this_thread::sleep_for(std::chrono::milliseconds(2000));
                if(counter == 5){
                    publisher.close();
                    std::this_thread::sleep_for(std::chrono::milliseconds(2000));
                    counter = 0;
                    break;
                }
                else{
                    counter++;
                }
            }
            catch(const zmq::error_t& ze){
                std::cout<<"PublisherThread: catch 2:"<<ze.what()<<std::endl;
            }
        }
    }
    catch(const zmq::error_t& ze){
        std::cout<<"PublisherThread: catch 1:"<<ze.what()<<std::endl;
    }
    try{
        zmq::socket_t publisher2(*ctx, ZMQ_PUB);
        publisher2.bind("tcp://127.0.0.1:5555");
        int counter = 0;
        while (true){
            try{
            //  Write three messages, each with an envelope and content
            publisher2.send(zmq::str_buffer("A"), zmq::send_flags::sndmore);
                publisher2.send(zmq::str_buffer("Message in A envelope\n"));
                std::this_thread::sleep_for(std::chrono::milliseconds(2000));
                publisher2.send(zmq::str_buffer("B"), zmq::send_flags::sndmore);
                publisher2.send(zmq::str_buffer("Message in B envelope\n"));
                std::this_thread::sleep_for(std::chrono::milliseconds(2000));
                publisher2.send(zmq::str_buffer("C"), zmq::send_flags::sndmore);
                publisher2.send(zmq::str_buffer("Message in C envelope\n"));
                std::this_thread::sleep_for(std::chrono::milliseconds(2000));
                if(counter == 50){
                    publisher2.close();
                    break;
                }
                else{
                    counter++;
                }
            }
            catch(const zmq::error_t& ze){
                std::cout<<"PublisherThread: catch 4:"<<ze.what()<<std::endl;
            }
        }
    }
    catch(const zmq::error_t& ze){
        std::cout<<"PublisherThread: catch 3:"<<ze.what()<<std::endl;
    }
    std::cout<<"PublisherThread: exiting:"<<std::endl;
}
void SubscriberThread1(zmq::context_t* ctx){
    std::cout<< "SubscriberThread1: " << std::endl;
    zmq::socket_t subscriber(*ctx, ZMQ_SUB);
    subscriber.setsockopt(ZMQ_SUBSCRIBE, "A", 1);
    subscriber.setsockopt(ZMQ_SUBSCRIBE, "B", 1);
    subscriber.connect("tcp://127.0.0.1:5555");
    while (1){
        try{
            zmq::message_t address;
            zmq::recv_result_t result = subscriber.recv(address);
            //  Read message contents
            zmq::message_t contents;
            result = subscriber.recv(contents);
            std::cout<< "Thread2: "<< std::string(static_cast<char*>(contents.data()), contents.size())<< std::endl;
        }
        catch(const zmq::error_t& ze){
            std::cout<<"subscriber catch error:"<<ze.what()<<std::endl;
        }
    }
}
int main(){
    zmq::context_t* zmq_ctx = new zmq::context_t();
    std::thread thread1(PublisherThread, zmq_ctx);
    std::thread thread2(SubscriberThread1, zmq_ctx);
    thread1.join();
    thread2.join();
}

Q:
" How subscriber can connect to publisher 2 with some exception handling that subscriber 2 is trying to connect. "

A:
The as-is code serially chains {_a_try_scope_of_PUB_in_an_infinite_loop_}, and only after that loop is break -terminated or otherwise exited does the very same thread1 code move on to the other one, leaving the former and only then entering {_a_try_scope_of_PUB2_again_into_another_infinite_loop_} .

This fact alone explains why the subscriber-side SUB -end never receives a single message from the second publisher, if the ZeroMQ configuration property addressed as ZMQ_LINGER silently blocks the progress of closing the first PUB -archetype socket instance still owned by the publisher.

In some native-API versions, this LINGER attribute was documented to have a default value such that the release would never happen while undelivered messages were still present inside the internal queue, in some cases even keeping the IP:PORT occupied "beyond" the code exit, leaving a hardware reboot as the strategy of last resort to release such a hanging Context() instance from its infinite occupation of that port ( this was Python code, under the pyzmq language wrapper over the ZeroMQ native-API DLL, all run on Windows; definitely not an experience one would like to have a single more time ). Newer versions of the native API gave LINGER another, non-infinite default waiting value. This version dependence is best overcome by explicitly setting the LINGER property right after the socket instance gets created, a sign of good engineering practice.

Looking at the option of letting each PUB work from its own autonomous thread - one PUB inside a threadP1 plus another PUB inside a threadP2 :

int main(){
    zmq::context_t* zmq_ctx = new zmq::context_t();

    std::thread threadP1( PubThread1, zmq_ctx );
    std::thread threadP2( PubThread2, zmq_ctx );

    std::thread threadS1( SubThread1, zmq_ctx );

    threadP1.join();
    threadP2.join();
    threadS1.join();
}
  • here we indeed have both of the PUB -sides independently entering their own try{...} -scope and proceeding into the infinite while(true){...} -loop present there ( ignoring the hidden break -branch for the moment ), yet a new problem appears here...
#define TAKE_A_NAP std::this_thread::sleep_for( std::chrono::milliseconds( 2000 ) )

void PubThread1( zmq::context_t* ctx ){
    try{
                                 std::cout << "PubThread1: " << std::endl;
        zmq::socket_t publisher( *ctx, ZMQ_PUB );
     // publisher.setsockopt(          ZMQ_LINGER, 0 );   // property settings
        publisher.bind( "tcp://127.0.0.1:5555" );         // resource .bind()

        int counter = 0;
        while ( true ){
            try{
                publisher.send( zmq::str_buffer( "A" ), 
                                zmq::send_flags::sndmore );
                publisher.send( zmq::str_buffer( "Msg in envelope A from PUB1\n" ) );
                                TAKE_A_NAP;
                publisher.send( zmq::str_buffer( "B" ),
                                zmq::send_flags::sndmore );
                publisher.send( zmq::str_buffer( "Msg in envelope B from PUB1\n" ) );
                                TAKE_A_NAP;
                publisher.send( zmq::str_buffer( "C" ),
                                zmq::send_flags::sndmore );
                publisher.send( zmq::str_buffer( "Msg in envelope C from PUB1\n" ) );
                                
                if ( counter > 4 ){
                     counter = 0;
                     publisher.close();
                                TAKE_A_NAP;
                     break;
                }
                counter++;                 // without this the loop never ends
            }
            catch( const zmq::error_t& ze ){
                                std::cout << "PubThread1: catch 2:"<<ze.what()<<std::endl;
            }
        }
    }
    catch( const zmq::error_t& ze ){
                                std::cout << "PubThread1: catch 1:"<<ze.what()<<std::endl;
           // publisher.close();        /* a fair place
           //                              to dismantle all resources here
           //                              to avoid a hanging instance ALAP */
    }
}

We now need to solve a new problem, appearing only now: colliding attempts to .bind() -acquire one and the same resource twice, which for obvious reasons cannot succeed. The "slower" of the pair will fail to also .bind("tcp://127.0.0.1:5555") onto the already occupied ADDRESS:PORT -resource that the faster one has managed to acquire and use.

Using different, non-colliding ADDRESS:PORT resources is one way; using a reversed bind/connect scenario is the other.

There the SUB.bind() -s and both of the { PUB1 | PUB2 }.connect() to a single, publicly known ADDRESS:PORT of the SUB -side AccessPoint.

Initial ZeroMQ versions performed topic-filtering on the SUB -side, meaning all messages were delivered across the network to all SUB -s, where the topic-filter was applied only after delivery to decide whether a match was found, dropping the message if not. More recent versions ( targeting larger RAMs and multi-core CPUs ) moved the topic-filtering, and also all the subscription-management, to the PUB -side. This increases the processing & RAM overheads on the PUB -side, and raises the question of what happens when a properly subscribed-to PUB1 gets closed ( throwing away all the subscription-management it carried ), leaving us in doubt whether a newly arrived PUB2 , once properly .bind() -ed onto the same ADDRESS:PORT location, will somehow receive the subscription-requests repeated from the SUB -side in the newer API mode. Details & version changes indeed matter here. Best to walk on the safer side - so the old assembler practitioners' #ASSUME NOTHING directive serves best if obeyed here.

Last, but not least, a handy notice. The PUB/SUB -archetype's topic-filtering does not require going into the complexities of multipart-message composition. The topic-filter is a pure left-to-right ASCII match against the beginning of the message, be it a multipart-message composed of { "A" | "...whatever.text.here..." } or a plain-message "A...whatever..." , "ABC...whatever..." , "B884884848484848484..." , "C<lf><lf><0xD0D0><cr>" and the like - simplicity helps performance and was always a part of the Zen-of-Zero.
