
QnA bot only responds when using exact phrasing in Azure and Teams Chat

I created a bot in Microsoft QnA Maker and have been testing the QnA pairs in the Maker for a while now. The bot recognizes the QnA pairs I made even when the question contains other words. However, after deploying the bot and testing it in both the Azure Test Chat and in Teams chat, I've found that it only responds appropriately when I use the exact question phrasing.

The bot has a QnA pair whose question is "hello". In the QnA Maker test pane, if I send variations of this question, such as "hello wicky" or "hello there", the bot recognizes the pair and responds with the appropriate answer. When testing in Azure and Teams, the bot ONLY responds appropriately if I say "hello". If I say any variation, it just responds with "No good match in FAQ."

When testing in QnA Maker, my bot has never had trouble answering questions that contain extra words but aren't the EXACT question; it only becomes picky in Azure and in Teams. I've republished the bot multiple times and verified that the Knowledge Base ID and Subscription Key are correct. What are my options here?

[Screenshot of the Azure Test Chat]

Here's something to check. If you're using the starter code for Bot Service, you're likely inheriting a dialog from QnAMakerDialog. The constructor takes several parameters, including a message to return when there is no good match, and a minimum confidence score below which that message is returned. For example, here's the constructor for my class that inherits from QnAMakerDialog:

    public BasicQnAMakerDialog()
        : base(new QnAMakerService(new QnAMakerAttribute(
            Utils.GetAppSetting("QnASubscriptionKey"),
            Utils.GetAppSetting("QnAKnowledgebaseId"),
            "No good match in FAQ.",
            0.5)))
    {
    }

Notice that the "no good match" text is actually set to "No good match in FAQ." (exactly the message you're seeing), and the minimum confidence score is 0.5.

If you check in the QnA Maker test chat, you should see the confidence score returned along with each answer. The test pane doesn't filter on this threshold the way the deployed bot does, which would explain why variations work there but fail in Azure and Teams. You can try lowering the minimum confidence score so that lower-scoring matches are returned instead of the no-good-match message.
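As a sketch, here is the same constructor with the threshold lowered. The value 0.3 is purely illustrative, not a recommendation; tune it against the scores you see in the test pane:

```csharp
// Sketch only: same dialog as above, with the minimum confidence score
// lowered from 0.5 to 0.3 (0.3 is an illustrative value, not a recommendation).
[Serializable]
public class BasicQnAMakerDialog : QnAMakerDialog
{
    public BasicQnAMakerDialog()
        : base(new QnAMakerService(new QnAMakerAttribute(
            Utils.GetAppSetting("QnASubscriptionKey"),
            Utils.GetAppSetting("QnAKnowledgebaseId"),
            "No good match in FAQ.",   // message returned when no answer clears the threshold
            0.3)))                     // lowered minimum confidence score
    {
    }
}
```

Lowering the threshold trades precision for recall: more questions get an answer, but some of those answers will be weaker matches.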

Improving your bot

You can add alternate phrasings to the questions in QnA Maker, then retrain and republish the model. Wash, rinse, repeat. You can also log questions for which QnA Maker returns a confidence score below a threshold. That gives you a starting point: the questions your users are actually asking where your bot doesn't have a great answer.
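That logging could be sketched by overriding the dialog's response hook. This assumes the v3 Bot Builder C# SDK (Microsoft.Bot.Builder.CognitiveServices.QnAMaker); names such as RespondFromQnAMakerResultAsync and the shape of QnAMakerResults should be checked against the SDK version you actually use:

```csharp
// Sketch: log questions whose best QnA Maker answer scores below a chosen
// threshold, then fall through to the normal response. Assumes the v3
// Microsoft.Bot.Builder.CognitiveServices.QnAMaker package; verify the
// override name and result properties against your SDK version.
[Serializable]
public class LoggingQnAMakerDialog : QnAMakerDialog
{
    private const double LogBelowScore = 0.7; // illustrative logging threshold

    public LoggingQnAMakerDialog()
        : base(new QnAMakerService(new QnAMakerAttribute(
            Utils.GetAppSetting("QnASubscriptionKey"),
            Utils.GetAppSetting("QnAKnowledgebaseId"),
            "No good match in FAQ.",
            0.5)))
    {
    }

    protected override async Task RespondFromQnAMakerResultAsync(
        IDialogContext context, IMessageActivity message, QnAMakerResults result)
    {
        double bestScore =
            (result?.Answers != null && result.Answers.Count > 0)
                ? result.Answers[0].Score
                : 0;

        if (bestScore < LogBelowScore)
        {
            // Replace with your real sink: Application Insights, table storage, etc.
            System.Diagnostics.Trace.TraceInformation(
                $"Low-confidence question ({bestScore:0.00}): {message.Text}");
        }

        await base.RespondFromQnAMakerResultAsync(context, message, result);
    }
}
```

Reviewing that log periodically tells you which alternate phrasings (or entirely new QnA pairs) to add before the next retrain-and-publish cycle.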

Depending on what your knowledge domain is, you might also try searching the web and summarizing the top result, then returning that to the user instead of just "No good match found." That's what Siri does when it comes up dry, which seems like the case with just about everything I ask it.
