
IBM Watson Assistant: How to train the chatbot to pick the right intent?

While developing and testing a conversation, IBM Watson Assistant identifies multiple intents and responds to the one with the highest confidence. Sometimes I want it to respond to the second intent, not the first, because it is more relevant to the current conversation context. For example, if the dialog contains nodes to handle making a transfer or a payment, then during the transfer scenario the user can say "execute", which will match both execute transfer and execute payment. I want Watson to always respond with execute transfer, which matches the current context, even if it identifies execute payment with higher confidence.

Users ask generic questions assuming that the bot is aware of the current context and will reply accordingly.

For example, assume I'm developing an FAQ bot to answer inquiries about two programs, Loyalty and Saving. For simplicity, assume there are four intents:

- Loyality-Define - examples related to what the loyalty program is
- Loyality-Join - examples related to how to join the loyalty program
- Saving-Define - examples related to what the saving program is
- Saving-Join - examples related to how to join the saving program

Users can start the conversation with an utterance like "tell me about the loyalty program", then ask "how to join" (without mentioning the program, assuming the bot is aware of it). In that case Watson will identify two intents (Loyality-Join, Saving-Join), and the Saving-Join intent may have a higher confidence.

So I need to intercept the dialog, maybe by creating a parent node that checks the context and, based on that, filters out the wrong intents.

I couldn't find a way to write code in the dialog to check the context and modify the intents array, so I want to ask about the best practice for doing that.

You can't edit the intents object, so what you want to do is tricky, but not impossible.

In your answer node, add a context variable like $topic. Fill it with a term that denotes the topic.

Then, if the user's question is not answered, you can check the topic context and add it to a new context variable. This new variable is then picked up by the application layer to re-ask the question.

Example:

User: tell me about the loyalty program
WA-> Found #Loyality-Define
     Set $topic to "loyalty"
     Return answer.

User: how to join
WA-> No intent found.
     $topic is not blank.
     Set $reask to "$topic !! how to join"
APP-> $reask is set.
      Ask question "loyalty !! how to join"
      Clear $reask and $topic
WA-> Found #Loyality-Join
     $topic set to "loyalty"
     Return answer

In that last exchange, if even the re-asked question finds no intent, clearing $topic stops it from looping.
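The application-layer side of this flow can be sketched as follows. This is illustrative only: handle_response, send_message, and the reask/topic variable names mirror the example above, not anything built into the Watson SDK.

```python
def handle_response(response, send_message):
    """Re-ask a topic-loaded question when the skill sets $reask.

    `response` is the JSON dict returned by the Assistant message API;
    `send_message(text, context)` posts a follow-up message and returns
    the next response. Both names are illustrative, not part of the SDK.
    """
    context = response.get("context", {})
    reask = context.get("reask")
    if reask:
        # Clear $reask and $topic so a second miss cannot loop forever.
        context["reask"] = None
        context["topic"] = None
        return send_message(reask, context)
    return response
```

If the re-asked question also finds no intent, the cleared variables guarantee the next pass falls through to the normal "anything else" handling instead of re-asking again.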

The other thing to be aware of: if a user changes topic, you must either set $topic to the new topic or clear it, to prevent old topics from being picked up.

NOTE: The question was changed, so it is technically a different question. Leaving the previous answer below.


You can use the intents[] object to analyse the returned results.

So you can check the confidence difference between the first and second intents. If they fall within a certain range, you can take action.

Example condition:

intents[0].confidence > 0.24 && intents[0].confidence - intents[1].confidence < 0.05

This checks whether the top two intents are within 5% of each other. The 0.24 threshold skips the case where the second intent would fall below 0.2, which normally means it should not be actioned on.

You may want to play with this threshold.
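The same check can be done at the application layer on the intents[] array of the response. A minimal sketch (is_ambiguous is an illustrative name; the thresholds mirror the condition above):

```python
def is_ambiguous(intents, floor=0.24, gap=0.05):
    """Return True when the top intent clears `floor` but the
    runner-up is within `gap` of it, i.e. the two intents are too
    close to act on confidently. `intents` is the intents[] list
    from a Watson Assistant message response."""
    if len(intents) < 2:
        return False
    top = intents[0]["confidence"]
    second = intents[1]["confidence"]
    return top > floor and (top - second) < gap
```

When this returns True, the application (or a dialog node with the equivalent condition) can ask the user to disambiguate or consult the stored topic.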

Just to explain why you do this, look at these two charts. In the first, it's clear only one question is being asked. The second chart shows two intents close together.

[Chart 1: one clearly dominant intent confidence] [Chart 2: two intents with near-equal confidence]


To take actual action, it's best to have a closed folder (condition = false). In that folder, look for matching intents[1]. This lowers the complexity within the dialog.


If you want something more complex, you can do k-means at the application layer, then pass the second intent back so the dialog logic can take action on it. There is an example here.
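Passing the runner-up intent back could look like this sketch: the application inspects the response and, when the stored topic matches the second intent, copies it into a context variable the dialog can branch on. promote_second_intent and forced_intent are illustrative names, not part of the Watson SDK.

```python
def promote_second_intent(response):
    """If the runner-up intent matches the topic stored in the
    context, expose it as a context variable so a dialog node can
    branch on it instead of the (wrong) top intent."""
    intents = response.get("intents", [])
    topic = response.get("context", {}).get("topic")
    if topic and len(intents) > 1 and topic.lower() in intents[1]["intent"].lower():
        response["context"]["forced_intent"] = intents[1]["intent"]
    return response
```

The modified context is then sent with the next message call, and a dialog node conditioned on $forced_intent handles the topic-correct answer.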

Watson Assistant Plus also does this automatically with the Disambiguation feature.

You can train Watson Assistant to respond accordingly. In the tool where you work on the skill, click the User conversations page in the navigation bar. In the message overview, identify the messages that were answered incorrectly and specify the correct intent. Watson Assistant will pick that up, retrain, and then hopefully answer correctly.

In addition, you could revisit how you define the intents. Are the examples like real user messages? Could you provide more variations? What conflicts make Watson Assistant pick the one intent but not the other?

Added:

If you want Watson Assistant to "know" about the context, you could extract the current intent and store it as a topic in a context variable. Then, if the "join" intent is detected, switch to the dialog node based on the "join" intent and the specific topic. For that I would recommend either having only one "join program" intent or, if really needed, putting the program specifics into the intent. Likely there is not much difference, and you would end up with just one intent.
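On the dialog side, that combination can be expressed directly in a node condition. A sketch using Watson Assistant's condition syntax, where #Join and $topic are illustrative names for the single "join program" intent and the stored topic variable:

```
#Join && $topic == 'loyalty'
```

A sibling node with `#Join && $topic == 'saving'` would then answer the saving variant, and a fallback `#Join` node can ask "which program?" when no topic is stored.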
