
How can I call a method in a running Perl process from another process?

OK, so I got the signal through, but for some reason the process exits after it receives the signal.

If I add an endless loop (while(1)) before I even create the socket, then it works as expected. So something in my socket code is quitting when the kill command is issued.

I don't see what it would be, though. Without the kill, the process sits there indefinitely, accepting connections and sending messages to the clients. Why would the kill (and the increment of the variable that follows) provoke the socket to get out of its loop and let the process end?

The socket code is below...

[EDITED again]

        use IO::Socket::INET;
        use IO::Select;

        $SIGNAL = 0;
        sub sigHandler {
            #&logData("SIGNALED");
            $SIGNAL++;
        }
        $SIG{"USR1"} = \&sigHandler;



        # Create a new socket, on port 9999
        my $PORT = 9999;
        print ("opening connection on port $PORT");
        $lsn = IO::Socket::INET->new(Listen    => 1,
                                     LocalPort => $PORT,
                                     Reuse     => 1,
                                     Proto     => 'tcp')
            or die("Couldn't start server: $!");



        # Create an IO::Select handler
        $sel = new IO::Select( $lsn );

        # Close filehandles

        close(STDIN); close(STDOUT);

        warn "Server ready.  Waiting for connections . . .  on \n";


        # Enter into while loop, listening to the handles that are available.
        # this SHOULD be an infinite loop... I don't see why it would eval to false when 
        # I send a signal to increment $SIGNAL by one.
        while( @read_ready = $sel->can_read ) {
            $MESSAGE = 0;
            $fh  = $read_ready[0];

                # Create a new socket
                if($fh == $lsn) {
                    $new = $lsn->accept;
                    $sel->add($new);
                    push( @data, fileno($new) . " has joined.");
                    warn "Connection from " . $new->peerhost . ".\n";

                }

                # Handle connection
                else {

                    $input = <$fh>;
                    $input = '' unless defined $input;  # undef here means the client disconnected
                    chomp $input;
                    warn "GOT INPUT '$input'\n";

                    if($input eq "<policy-file-request/>"){
                        $MESSAGE = 
                            qq~<?xml version="1.0"?>
                            <cross-domain-policy>
                              <allow-access-from domain="*" to-ports="*"/>
                            </cross-domain-policy>\0~;
                        $SIGNAL++;
                    }


                    if ( $input eq '') {#disconnection notification by client

                        warn "Disconnection from " . $new->peerhost . ".\n";                
                        $sel->remove($fh);
                        $fh->close;
                    }
                    if ( $input eq 'READY'){
                        warn "CLIENT READY = 1\n";
                        $CLIENT_READY = 1;
                    }

                }



            # Write to the clients that are available
            foreach $fh ( @write_ready = $sel->can_write(0) ) {
                if($MESSAGE == 0){
                    #set message here based on criteria
                    $MESSAGE = "UPDATE";
                }
                warn "outside send if\n";
                if($CLIENT_READY == 1 && $SIGNAL > 0){
                    warn ("sending $MESSAGE to $fh\n");         
                    $CLIENT_READY = 0;          
                    $SIGNAL--;
                    print $fh "$MESSAGE\0" or warn "can't send message to $fh";  
                }             
            }
        }


        warn "Server ended.\n";

I don't know whether I'm misreading your question or you're overlooking the obvious, but I wouldn't rely on signals for something like this.

I'd have scriptA listening to a socket and have scriptB send a message to scriptA (rather than a signal). If it receives the right message, it would write out the relevant data to all the clients connected to it.
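A minimal sketch of the scriptB side, reusing the port from the question's code (the "NOTIFY" command is illustrative; scriptA would check for it in its read loop the same way it already checks for "READY"):

use strict;
use warnings;
use IO::Socket::INET;

# Connect to scriptA's existing listener (port 9999 in the question).
my $sock = IO::Socket::INET->new(
    PeerAddr => 'localhost',
    PeerPort => 9999,
    Proto    => 'tcp',
) or die "Can't connect to scriptA: $!";

# Send a command for scriptA's read loop to act on.
print $sock "NOTIFY\n";
close $sock;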

If you install a signal handler in your Perl script, the signal does not have to end the script (just do not exit the handler by calling die). Note that you do not need to send SIGINT or SIGKILL; you can also send SIGUSR1.

Edit: The command line of your kill command has a comma after -USR1 that should not be there (it should be kill -USR1 4169).

Edit 2: The while (can_read) loop probably exits when it receives an empty array of file handles after being interrupted by the signal. You could prevent this with the condition:

while ((@read_ready = $sel->can_read) || 0 < $sel->count) {

and update the loop body to handle an empty @read_ready array, as sketched below.
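For example (a sketch; the loop body is elided):

while ((@read_ready = $sel->can_read) || 0 < $sel->count) {
    # A signal interrupted the blocking select: nothing is ready yet.
    next unless @read_ready;
    # ... handle $read_ready[0] as before ...
}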

Just to expand on the previous answers/comments: you can write a handler for SIGUSR1/SIGUSR2 to catch external signals and run corresponding subroutines in your script. All scriptB has to do is run "kill -SIGUSR[12] {pidof scriptA}" to invoke the handler in scriptA.

A sample signal handler would be something like this:

#!/usr/bin/perl
use strict;
use warnings;

# Install handlers for the two user-defined signals.
$SIG{USR1} = sub { print "Caught USR1\n"; };
$SIG{USR2} = sub { print "Caught USR2\n"; };

# Idle forever. sleep() is interrupted when a signal arrives, the handler
# runs, and the loop continues. (A bare while (sleep 5) {} can exit,
# since an interrupted sleep may return 0.)
while (1) { sleep 5; }

Make sure that you do not have a die/exit call in your signal-handling subroutines, so that the program can continue uninterrupted after the signal is handled.
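For completeness, the sending side can be plain Perl instead of the shell's kill command; a minimal sketch (the pidfile path is illustrative, assuming scriptA writes its PID there at startup):

use strict;
use warnings;

# Read scriptA's PID from a pidfile (hypothetical location).
open my $pidfile, '<', '/tmp/scriptA.pid' or die "No pidfile: $!";
chomp(my $pid = <$pidfile>);
close $pidfile;

# kill() returns the number of processes successfully signaled.
kill('USR1', $pid) or warn "Could not signal process $pid: $!";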

[EDIT] After taking a second look at the question, I think that Noufal's approach would be a better option, since scriptA already has a socket open. Why don't you have scriptB communicate with scriptA over the socket and redirect the control flow based on your needs?

[EDIT2] I had a look at the underlying IO::Select code for can_read, and there is indeed a call to select() to get the read-ready file descriptors. In your case, since the timeout value is undef, the select call blocks. According to the perldoc documentation, whether select is restarted after a signal handler runs is implementation-dependent. In your case (and mine) the select() call is not restarted, so an empty array is returned to the while condition check, and the server exits.

A quick hack around this problem is:

while ((@read_ready = $sel->can_read) || 1) {
    next if @read_ready == 0;
    ...
}

This is the point where I'd consider scrapping everything and rethinking the design flow.

By using signals, you could introduce subtle bugs depending on what scriptA is doing when the signal is delivered. I would recommend, instead, that you add a check to the scriptA main loop that looks for communication from scriptB.

I would further recommend that this communication take the form of a named semaphore that both scripts open. scriptB can post the semaphore when appropriate, and scriptA can perform a non-blocking $sem->trywait() at some point in its loop. See POSIX::RT::Semaphore for more information on the exact syntax.
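A minimal sketch of that arrangement (assuming POSIX::RT::Semaphore is installed; the semaphore name /scriptAB is illustrative):

use POSIX::RT::Semaphore;
use Fcntl qw(O_CREAT);

# Both scripts open the same named semaphore, created with an initial count of 0.
my $sem = POSIX::RT::Semaphore->open('/scriptAB', O_CREAT, 0660, 0);

# In scriptB, when scriptA should act:
$sem->post;

# In scriptA's main loop (non-blocking; returns false if the count is zero):
if ($sem->trywait) {
    # handle the instruction from scriptB
}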

Using this method, scriptA gets to decide when it is appropriate to handle the instruction from scriptB, and you avoid all the nasty issues that can arise from subroutines being interrupted by asynchronous signal delivery.

For more on signals, and the subtleties thereof, check out the Perl Cookbook and Programming Perl.

Although this may not be a part of your requirements, if you wanted to call multiple methods, you could set up a remote procedure call (RPC) interface. Something like Event::RPC, but that may be overkill.
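A rough sketch of what that could look like, based on the module's synopsis (class and method names are illustrative; check the Event::RPC documentation for the exact interface):

# scriptA: expose a class whose methods scriptB may call remotely.
use Event::RPC::Server;
my $server = Event::RPC::Server->new(
    port    => 5555,
    classes => {
        'ScriptA::API' => {
            new    => '_constructor',
            notify => 1,
        },
    },
);
$server->start;

# scriptB: connect and invoke the method as if it were local.
use Event::RPC::Client;
my $client = Event::RPC::Client->new(server => 'localhost', port => 5555);
$client->connect;
my $api = ScriptA::API->new;
$api->notify;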
