
Problem coordinating Node Red replicas on Kubernetes accessing IBM Watson Assistant

I have a problem with IBM Watson Assistant. I created a Node-RED deployment with 2 replicas on Kubernetes (so I have 2 Node-RED containers). Inside a Node-RED flow I access Watson Assistant.

A load balancer distributes requests between the two replicas, but there is a problem: each replica ends up with a different conversation_id, so it is as if I have two open chats at once (2 different contexts).

I don't understand how to get a single conversation_id with a single context. Is there a way to force the conversation_id to a custom id?

In my Node-RED logic there is nothing that controls the beginning of a conversation. I let Watson Assistant handle it and create the initial id.

When an app/client starts a conversation by contacting the Watson Assistant service, no conversation_id is sent as part of the message API call. In the response from Watson Assistant, a conversation_id is included in the context object. The client then passes that context object back to Watson Assistant with each subsequent message call. All the communication is stateless, which is exactly what makes it work in high-availability apps with multiple replicas. Typically, the conversation context is persisted by the app/client and thereby made available to all replicas.
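The round trip described above can be sketched as follows. This is a minimal mock, not the real Watson SDK: `assistantMessage` is a hypothetical stand-in for the `/message` endpoint, but the contract it illustrates (service assigns the conversation_id on the first turn, client echoes the whole context back on every later turn) is the actual Watson Assistant pattern.

```javascript
// Hypothetical stand-in for the Watson Assistant /message call.
// On the first turn (no context yet) the "service" creates the
// conversation_id; afterwards it only updates the context it receives.
function assistantMessage(text, context) {
  if (!context || !context.conversation_id) {
    context = {
      conversation_id: 'conv-' + Math.random().toString(36).slice(2),
      turn: 0,
    };
  }
  context.turn += 1;
  return { output: { text: ['echo: ' + text] }, context };
}

// Client side: the only state kept between turns is the context object.
let ctx = null;
const first = assistantMessage('hello', ctx);
ctx = first.context; // persist this (in memory, a DB, etc.)
const second = assistantMessage('tell me more', ctx);
// second.context.conversation_id equals first.context.conversation_id:
// whoever holds the context can continue the conversation.
```

Because the service itself holds no session, any replica that is handed this context object can continue the same conversation.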

To me it seems that you have two replicas of a flow, but no logic to maintain a common context. How do you identify different users and map them to a conversation? How do both replicas know about an ongoing conversation? By default, that state is kept in memory, separately per replica. You would need to add a database, store the context there, and look up an existing conversation before starting a new one.
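One way to sketch that lookup-before-starting logic, under the assumption that conversations are keyed by a user id: here a `Map` stands in for a real shared store (Redis, Cloudant, etc.) that both replicas can reach, and `assistantMessage` is again a mock of the Watson call, not the real SDK.

```javascript
// Mock of the Watson Assistant /message call (hypothetical, as before).
function assistantMessage(text, context) {
  if (!context || !context.conversation_id) {
    context = {
      conversation_id: 'conv-' + Math.random().toString(36).slice(2),
      turn: 0,
    };
  }
  context.turn += 1;
  return { output: { text: ['echo: ' + text] }, context };
}

// Shared context store keyed by user id. In a real deployment this Map
// would be an external database so that ALL replicas see the same state.
const contextStore = new Map();

// Any replica running this handler resumes the user's existing
// conversation, because it looks the context up before calling the service.
function handleUserMessage(userId, text) {
  const ctx = contextStore.get(userId) || null; // null => new conversation
  const response = assistantMessage(text, ctx);
  contextStore.set(userId, response.context);   // persist for the next turn
  return response;
}
```

In Node-RED this would typically live in a function node backed by a persistent context store (or an explicit database node) rather than the default in-memory context, which is per-replica and therefore causes exactly the split you are seeing.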
