Kafka-python producer.send isn't being received in a try-except block, but does send with time.sleep(1)
I'm testing a script that runs binwalk on a file and then sends a Kafka message to let the sending file know whether it completed or failed. It looks like this:
import os
import inspect
import json

from kafka import KafkaConsumer, KafkaProducer

if __name__ == "__main__":
    # finds the path of this file
    scriptpath = os.path.dirname(inspect.getfile(inspect.currentframe()))
    print(scriptpath)
    # sets up kafka consumer on the binwalk topic and kafka producer for the bwsignature topic
    consumer = KafkaConsumer('binwalk', bootstrap_servers=['localhost:9092'])
    producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
    # watches the binwalk kafka topic
    for msg in consumer:
        # load the json
        job = json.loads(msg.value)
        # get the filepath of the .bin
        filepath = job["src"]
        print(0)
        try:
            # runs the script
            binwalkthedog(filepath, scriptpath)
            # send a receipt
            producer.send('bwsignature', b'accepted')
        except:
            producer.send('bwsignature', b'failed')
            pass
    producer.close()
    consumer.close()
If I send in a file that doesn't trigger any errors in the binwalkthedog function, everything works fine, but if I give it a file that doesn't exist, it prints a general error message and moves on to the next input, as it should. For some reason, though, producer.send('bwsignature', b'failed') doesn't send unless something creates a delay after the binwalkthedog call fails, like time.sleep(1) or a for loop that counts to a million.
Obviously I could keep that in place, but it's really gross and I'm sure there's a better way to do this.
This is the temp script I'm using to send and receive a signal from the binwalkthedog module:
import json

from kafka import KafkaConsumer, KafkaProducer
from kafka.errors import KafkaError

job = {
    'src': '/home/nick/Documents/summer-2021-intern-project/BinwalkModule/bo.bin',
    'id': 1
}
chomp = json.dumps(job).encode('ascii')
receipt = KafkaConsumer('bwsignature', bootstrap_servers=['localhost:9092'])
producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
future = producer.send('binwalk', chomp)
try:
    record_metadata = future.get(timeout=10)
except KafkaError:
    print("sucks")
    pass
print(record_metadata.topic)
print(record_metadata.partition)
print(record_metadata.offset)
producer.close()
for msg in receipt:
    print(msg.value)
    break
Kafka producers batch many records together to reduce the number of requests made to the server. If you want to force records to send, rather than introducing a blocking sleep call or calling a get on the future, you should use producer.flush().
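As a minimal sketch of that fix, the try/except from the question could flush after each send so the receipt leaves the producer's batch buffer immediately. The process_job helper and its run_job callable are hypothetical names introduced here for illustration; with kafka-python, producer would be the KafkaProducer from the question.

```python
def process_job(producer, run_job):
    """Run a job and send a receipt on the bwsignature topic,
    flushing so the record isn't left sitting in the batch buffer."""
    try:
        run_job()
        producer.send('bwsignature', b'accepted')
    except Exception:
        producer.send('bwsignature', b'failed')
    finally:
        # block until all buffered records have actually been sent
        producer.flush()
```

flush() blocks until every buffered record has been delivered (or has failed), so the "failed" receipt goes out even when the loop immediately moves on to the next message.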