
IBM Watson Assistant: How to train the chatbot to pick the right intent?

While developing and testing the conversation, IBM Watson Assistant identifies multiple intents and responds to the one with the highest confidence. Sometimes I want it to respond to the second intent rather than the first, because it is more relevant to the current conversation context. For example, if the dialog contains nodes to handle making a transfer or a payment, then during the transfer scenario the user can say "execute", which will match both "execute transfer" and "execute payment". I want Watson to always respond with "execute transfer", which matches the current context, even if it identifies "execute payment" with higher confidence.

So users ask generic questions, assuming that the bot is aware of the current context and will reply accordingly.

For example, assume I'm developing an FAQ bot to answer inquiries about two programs, Loyalty and Saving. For simplicity, assume there are four intents:

- #Loyality-Define - examples about what the loyalty program is
- #Loyality-Join - examples about how to join the loyalty program
- #Saving-Define - examples about what the saving program is
- #Saving-Join - examples about how to join the saving program

Users can start the conversation with an utterance like "tell me about the loyalty program", then ask "how to join" (without mentioning the program, assuming the bot is aware of it). In that case Watson will identify two intents (#Loyality-Join and #Saving-Join), and #Saving-Join may have the higher confidence.

So I need to intercept the dialog (maybe by creating a parent node that checks the context and, based on that, filters out the wrong intents).

I couldn't find a way to write code in the dialog to check the context and modify the intents array, so I'd like to ask about the best practice for doing that.

You can't edit the intents object, so what you want to do is tricky, but not impossible.

In your answer node, add a context variable like $topic. Fill it with a term that denotes the topic.
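As a rough sketch of what such an answer node might look like in the skill's dialog JSON export (the intent name comes from the question above; the node name, response text, and exact field layout here are illustrative, not an exact export):

```json
{
  "dialog_node": "Loyality Define",
  "conditions": "#Loyality-Define",
  "context": {
    "topic": "loyalty"
  },
  "output": {
    "text": "The loyalty program is our rewards scheme ..."
  }
}
```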

Then, if the user's question is not answered, you can check for the topic context and add it to a new context variable. This new variable is then picked up by the application layer, which re-asks the question.

Example:

User: tell me about the loyalty program
WA-> Found #Loyality-Define
     Set $topic to "loyalty"
     Return answer. 

User: how to join
 WA-> No intent found. 
      $topic is not blank. 
      Set $reask to "$topic !! how to join"
APP-> $reask is set. 
      Ask question "loyalty !! how to join"
      Clear $reask and $topic
 WA-> Found #Loyalty-join
      $topic set to "loyalty"
      Return answer

Now, in the last situation, if the answer is still not found even with the loaded question, clearing $topic stops it from looping.
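The re-ask flow above can be sketched at the application layer. Here `send_to_watson` is a hypothetical stub standing in for the real Watson Assistant message call (it simulates the dialog behaviour from the trace); the point is the `$reask` handling in `ask`:

```python
def send_to_watson(text, context):
    """Hypothetical stub simulating the dialog skill from the example trace.
    A real application would call the Watson Assistant message API here."""
    if "loyalty" in text and "how to join" in text:
        return {"context": {"topic": "loyalty"}, "answer": "Join loyalty like so..."}
    if "loyalty" in text:
        return {"context": {"topic": "loyalty"}, "answer": "Loyalty is..."}
    topic = context.get("topic")
    if topic:
        # No intent found, but a topic is set: dialog builds the $reask value.
        return {"context": {"reask": f"{topic} !! {text}", "topic": topic}}
    return {"context": {}, "answer": "I didn't understand."}

def ask(text, context):
    """Send a message; if the dialog set $reask, clear state and re-ask."""
    response = send_to_watson(text, context)
    ctx = response["context"]
    reask = ctx.pop("reask", None)
    if reask:
        ctx.pop("topic", None)   # clear $topic so a second miss can't loop
        return ask(reask, ctx)   # re-ask the loaded question
    return response

r1 = ask("tell me about the loyalty program", {})
r2 = ask("how to join", r1["context"])
print(r2["answer"])  # -> Join loyalty like so...
```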

The other thing to be aware of is that if a user changes topic, you must either set the topic or clear it, to prevent it from picking up old topics.

NOTE: The question was changed, so it is technically a different question. I'm leaving the previous answer below.


You can use the intents[] object to analyse the returned results.

You can check the confidence difference between the first and second intents. If they fall within a certain range, you can take action.

Example condition:

intents[1].confidence > 0.24 && (intents[0].confidence - intents[1].confidence) < 0.05

This checks whether the two intents are within 5% of each other. The 0.24 threshold is there to ignore the second intent when it falls below roughly 0.2, which normally means the intent should not be actioned on.

You may want to experiment with this threshold.
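The same check can be mirrored at the application layer. A minimal sketch, assuming intents arrive as (name, confidence) pairs sorted by confidence in descending order, using the thresholds suggested above:

```python
def is_ambiguous(intents, floor=0.24, gap=0.05):
    """Return True when the top two intents are close enough that the
    second one should also be considered.

    intents: list of (name, confidence) pairs, highest confidence first.
    floor:   minimum confidence for the second intent to matter at all.
    gap:     maximum confidence difference to count as "close".
    """
    if len(intents) < 2:
        return False
    (_, c0), (_, c1) = intents[0], intents[1]
    return c1 > floor and (c0 - c1) < gap

print(is_ambiguous([("transfer", 0.91), ("payment", 0.20)]))  # False: clear winner
print(is_ambiguous([("transfer", 0.55), ("payment", 0.52)]))  # True: too close to call
```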

Just to explain why you do this, look at these two charts. In the first, it's clear there is only one question being asked. The second chart shows the two intents close together.

(Charts omitted: the first shows a single dominant intent; the second shows two intents with nearly equal confidence.)


To take actual action, it's best to have a closed folder (condition = false). In that folder you look for a matching intents[1]. This lowers the complexity within the dialog.


If you want something more complex, you can do k-means at the application layer, then pass the second intent back from the application layer so the dialog logic can act on it. There is an example here.
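A minimal sketch of that idea, not the linked example: cluster the confidence scores (1-D, k = 2) and treat every intent in the higher-centred cluster as a candidate to pass back to the dialog. Plain Python, no libraries:

```python
def top_cluster(confidences, iters=20):
    """Simple 1-D k-means with k=2 over confidence scores.
    Returns the indices of the values in the higher-centred cluster."""
    c_lo, c_hi = min(confidences), max(confidences)
    hi_idx = list(range(len(confidences)))
    for _ in range(iters):
        # Assign each value to the nearer of the two centroids.
        hi_idx = [i for i, v in enumerate(confidences)
                  if abs(v - c_hi) <= abs(v - c_lo)]
        lo_idx = [i for i in range(len(confidences)) if i not in hi_idx]
        if not lo_idx:  # everything collapsed into one cluster
            break
        # Recompute centroids from the current assignment.
        c_hi = sum(confidences[i] for i in hi_idx) / len(hi_idx)
        c_lo = sum(confidences[i] for i in lo_idx) / len(lo_idx)
    return hi_idx

intents = [("transfer", 0.55), ("payment", 0.52), ("balance", 0.10)]
candidates = [intents[i][0] for i in top_cluster([c for _, c in intents])]
print(candidates)  # -> ['transfer', 'payment'] (both close intents survive)
```

When the top cluster contains more than one intent, the application can pass the runner-up back so the dialog can act on it.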

Watson Assistant Plus also does this automatically with its Disambiguation feature.

You can train Watson Assistant to respond accordingly. In the tool where you work on the skill, click the "User conversations" page in the navigation bar. In the message overview, identify the messages that were answered incorrectly and specify the correct intent. Watson Assistant will pick that up, retrain, and then hopefully answer correctly.

In addition, you could revisit how you define the intents. Are the examples like real user messages? Could you provide more variations? What are the conflicts that make Watson Assistant pick one intent but not the other?

Added:

If you want Watson Assistant to "know" about the context, you could extract the current intent and store it as a topic in a context variable. Then, if the "join" intent is detected, switch to the dialog node based on the "join" intent and the specific topic. For that, I would recommend either having only one intent for "join program" or, if really needed, putting the specifics into the intent. Likely there is not much difference, and you will end up with just one intent.

