<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>IVA - AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</title>
	<atom:link href="https://www.nuecho.com/category/iva/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.nuecho.com/category/iva/</link>
	<description>Nu Echo</description>
	<lastBuildDate>Mon, 20 Sep 2021 15:26:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.nuecho.com/wp-content/uploads/2019/11/cropped-favicon-32x32.png</url>
	<title>IVA - AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</title>
	<link>https://www.nuecho.com/category/iva/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Question answering experiments with the Dialogflow FAQ Knowledge Connectors</title>
		<link>https://www.nuecho.com/question-answering-experiments-with-the-dialogflow-faq-knowledge-connectors/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=question-answering-experiments-with-the-dialogflow-faq-knowledge-connectors</link>
		
		<dc:creator><![CDATA[Yves Normandin]]></dc:creator>
		<pubDate>Wed, 15 Jan 2020 13:00:57 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVA]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=5478</guid>

					<description><![CDATA[<p>Chatbots come in multiple forms and can serve many different purposes. Without claiming to be exhaustive, we can mention the task-oriented bots, which aim to assist a user in a given set of transactional tasks (for example, banking operations); the chit-chat bots, whose primary objective is to mimic casual conversation; and the question answering bots, [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/question-answering-experiments-with-the-dialogflow-faq-knowledge-connectors/">Question answering experiments with the Dialogflow FAQ Knowledge Connectors</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Chatbots come in multiple forms and can serve many different purposes. Without claiming to be exhaustive, we can mention:</p>
<ul>
<li>the <em>task-oriented bots</em>, which aim to assist a user in a given set of transactional tasks (for example, banking operations)</li>
<li>the <em>chit-chat bots</em>, whose primary objective is to mimic casual conversation</li>
<li>and the <em>question answering bots</em>, whose purpose is to, you guessed it, answer users’ questions.</li>
</ul>
<p>These categories are not mutually exclusive: A task-oriented bot can support some level of small talk, and question answering bots can assist the user in some tasks. These are to be perceived as paradigms more than strict definitions.</p>
<p>In this article, we will focus on the concept of the question answering chatbot, and more specifically on the implementation of this concept in Dialogflow, using <a target="_blank" href="https://cloud.google.com/dialogflow/docs/knowledge-connectors" rel="noopener">Knowledge connectors</a> (still a beta feature at the time of writing).</p>
<h2>About Dialogflow FAQ knowledge connectors</h2>
<p>Knowledge connectors are meant to complement the intents of an agent and offer a quick and easy way to integrate existing knowledge bases to a chatbot. Dialogflow offers two types of knowledge connectors: FAQ and Knowledge Base Articles. Here we will mostly focus on the FAQ knowledge connector, which models the knowledge bases as a list of question-answer pairs (QA pairs).</p>
<p>Each QA pair in a FAQ knowledge connector can be seen as a special kind of intent that has a single training phrase and a single text response. At first sight, the main advantages of a FAQ knowledge connector over defined intents seem to be the ease of integrating external knowledge bases and the fact that, contrary to defined intents, more than a single response can be returned (which can be convenient for a search mode).</p>
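<p>To make the format concrete, a FAQ knowledge base of this kind boils down to a list of question-answer rows that can be exported as CSV for upload. The QA pairs below are invented for illustration; the exact upload format should be checked against the Dialogflow documentation.</p>

```python
import csv
import io

# Invented QA pairs, standing in for rows extracted from an airport FAQ.
qa_pairs = [
    ("How do I reserve a parking space online?",
     "Parking can be reserved from the Parking section of our website."),
    ("What items are prohibited in carry-on baggage?",
     "Liquids over 100 ml, sharp objects, and flammable materials."),
]

# Each row pairs the single "training phrase" (the question) with the
# single text response (the answer) of one QA pair.
buffer = io.StringIO()
csv.writer(buffer).writerows(qa_pairs)
faq_csv = buffer.getvalue()
print(faq_csv)
```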
<p>Are there any other advantages? One of our hypotheses when we started this work was that knowledge connectors would be able to leverage the answer in the QA pair when matching the query, not just the question. This is not explicitly mentioned in the documentation, but it would make sense for two reasons. First, it’s hard to believe that any NLU engine can effectively learn from a single training phrase. There are always many ways to ask a question that don’t look at all like the training phrase. Second, FAQ data sources often have long answers that could conceivably be correct answers to a wide range of questions other than the one provided. When trying to find the correct answer to a user query, it would therefore make sense for the engine to focus as much on finding the answer that best answers the query as on finding the question that best matches the query.</p>
<h2>Anatomy of the Knowledge Base</h2>
<p>The knowledge base we used was taken from the Frequently Asked Questions (FAQ) section of the website of a North American airport. It contains more than a hundred QA pairs, separated into a dozen categories. Each category contains a number of subcategories, ranging from one to about ten.</p>
<p>While some questions have straightforward answers, others have complex, multi-paragraph ones. All the answers are primarily composed of text, but many also contain tables or images, and some even contain videos. Many answers also have hyperlinks leading to other parts of the FAQ or to external pages.</p>
<h2>Minor surgery on the Knowledge Base</h2>
<p>While analyzing the knowledge base, we found that several questions only made sense within the context of the category and sub-category in which they appear. For instance, in the Parking section, we have the question “<em>How do I reserve online?</em>”. The FAQ context makes it clear that this is a question about parking reservation, but this information is lost when modeling the knowledge base as a CSV-formatted list of question-answer pairs (QA pairs). We therefore had to modify several of the original questions so that they could be understood without the help of any context. So, in the example above, the question was changed to: “<em>How do I reserve a parking space online?</em>”.</p>
<h2>What questions users ask</h2>
<p>The airport website offers users two distinct ways to type queries to get answers: one that clearly looks like a search bar, and another that looks like a chat widget that pops up when the user clicks a “Support” button at the bottom right of the web page. Both of them do the exact same thing: They perform a search in the knowledge base and return links to the most relevant articles. However, we believe that the chat-like interface elicits more complex, natural queries, since users may believe they are entering a chat conversation.</p>
<p>The airport provided us with a set of real user queries collected from the two query interfaces. This was very important because it told us what questions users really ask, and it provided us with real user data for our experiments.</p>
<p>Of course, we had to do some cleaning on that data set, as a good number of queries were not relevant for our purpose: digit strings (most likely phone numbers and extensions), flight numbers with no other indication, or purely phatic sentences (for example, “<em>how are you?</em>”). We also observed that the queries could be separated into two groups: either they were really short and to the point, with one or two words at most, or they were long and complex, with lots of information and details, and usually formulated as a question.</p>
<h2>Augmenting the corpus</h2>
<p>Once the data set was cleaned, we ended up with about 300 queries (down from a little more than 1500!). Clearly, this would not be sufficient for our experiments, so we decided to collect additional data that, we hoped, would still be representative of real user queries.</p>
<p>We considered using crowdsourcing solutions (like <a target="_blank" href="https://www.mturk.com" rel="noopener">Amazon Mechanical Turk</a>) but ultimately decided to try other options. Instead, we used the <em>People also ask</em> and <em>Related searches</em> features of Google Search to glean additional user data. We would start with a user query (real or fabricated) and collect the related questions proposed by Google. One interesting feature of <em>People also ask</em> is that every time we expand one of the choices, it proposes several additional related questions. This way, we ended up collecting about 300 additional queries with little to no effort, effectively doubling the number of queries we had.</p>
<p>At the same time, we also organized an internal data collection at Nu Echo, where our colleagues had to write plausible user queries based on general categories that we assigned to them. This gave us over 400 additional queries, bringing our total to about a thousand.</p>
<h2>Annotating the corpus</h2>
<p>Annotating the corpus consists of manually determining which QA pair in the knowledge base, if any, correctly answers each of the queries in the corpus. While this sounds simple, it proved to be a surprisingly difficult task. Indeed, the human annotator has to carefully analyze each potential answer before deciding whether or not it’s a correct response to the query. For some queries, there was no correct answer, but there were one or several QA pairs that provided relevant answers.</p>
<p>We ended up separating the corpus into three categories:</p>
<ol>
<li>Queries with a correct answer (<em>an exact match</em>);</li>
<li>Queries without an exact match but with one or several relevant answers (<em>relevant matches</em>);</li>
<li>Queries without any match at all.</li>
</ol>
<p>Queries in the second category were labeled with all relevant QA pairs. When we finished annotating, only 33% of the queries had an exact match, even though 91% of the corpus can be considered “in-domain”. An interesting observation is that the FAQ coverage varied significantly based on the source of the queries, as shown in the table below.</p>
<table>
<tbody>
<tr>
<td><b>Source</b></td>
<td><b>Count</b></td>
<td><b>Exact match</b></td>
<td><b>Coverage</b></td>
</tr>
<tr>
<td>Google</td>
<td>275</td>
<td>133</td>
<td>48.36%</td>
</tr>
<tr>
<td>Website queries</td>
<td>303</td>
<td>63</td>
<td>20.79%</td>
</tr>
<tr>
<td>Nu Echo</td>
<td>440</td>
<td>150</td>
<td>34.09%</td>
</tr>
<tr>
<td><b>Total</b></td>
<td><b>1018</b></td>
<td><b>346</b></td>
<td><b>33.99%</b></td>
</tr>
</tbody>
</table>
<p>Our explanation is that the Google queries tended to be simpler and more representative of real user queries, while the website queries were often out-of-domain, incomplete, or ambiguous. The Nu Echo queries tended to be overly “creative” and generally less realistic.</p>
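<p>As a quick sanity check, the coverage figures in the table above can be recomputed directly from the per-source counts (coverage is simply the fraction of queries with an exact match):</p>

```python
# Per-source (total queries, queries with an exact match), from the table above.
sources = {
    "Google": (275, 133),
    "Website queries": (303, 63),
    "Nu Echo": (440, 150),
}

total = sum(n for n, _ in sources.values())
matched = sum(m for _, m in sources.values())

# Coverage per source, then overall.
for name, (n, m) in sources.items():
    print(f"{name:16s} {m}/{n} = {m / n:.2%}")
print(f"{'Total':16s} {matched}/{total} = {matched / total:.2%}")
```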
<h2>Train and test set</h2>
<p>We split our corpus into a train set and a test set. The queries in the train set are used to improve accuracy, while the test set is used to measure it. Note that this is a very small test set: It contains 407 queries, of which only 151 have an exact match (37%). It is also very skewed: The top 10% most frequent QA pairs account for 61% of those 151 queries.</p>
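<p>As a sketch, such a split can be done by shuffling the corpus and slicing off a fixed-size test set. The query texts below are placeholders; only the corpus size and the 407-query test-set size come from our data.</p>

```python
import random

rng = random.Random(42)  # fixed seed so the split is reproducible
corpus = [f"query {i}" for i in range(1018)]  # placeholder for the 1018 queries
rng.shuffle(corpus)

TEST_SIZE = 407
test_set, train_set = corpus[:TEST_SIZE], corpus[TEST_SIZE:]
print(len(train_set), len(test_set))  # 611 407
```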
<h2>Performance metrics</h2>
<p>To measure performance, we need to decide which performance metrics to use. We opted for <a target="_blank" href="https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall" rel="noopener">precision and recall</a> as our main metrics. They are defined as follows:</p>
<ul>
<li><em>Precision</em>: of all the predictions returned by Dialogflow, how many of them are actually correct?</li>
<li><em>Recall</em>: of all the actual responses we’d like to get, how many of them were actually predicted by Dialogflow?</li>
</ul>
<p>In our case, we considered only exact matches and the top prediction returned by Dialogflow. One reason for this is that relevant matches are fairly subjective and we have found that the agreement between annotators tends to be low. Another reason is that this makes comparison with other techniques (e.g., using defined intents) easier since these techniques may only return one prediction.</p>
<p>Since Dialogflow returns a confidence score that ranges from 0 to 1 for each prediction it makes, we can control the precision-recall tradeoff by changing the confidence threshold. For example:</p>
<ul>
<li>when the threshold is at 0, we accept all predictions, and the recall is at its highest, while the precision is usually at its lowest;</li>
<li>when the threshold is at 1, we exclude almost all predictions, so the recall will be at its lowest, but the precision is usually at its highest.</li>
</ul>
<p>When shown graphically, this provides a very useful visualization that makes it easy to quickly evaluate the performance of an agent against a given set of queries, or to compare agents (see results below).</p>
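<p>The computation behind such a curve can be sketched as follows. The predictions and labels below are invented; as described above, only the top Dialogflow prediction counts, and a query’s gold label is its exact-match QA pair (or nothing).</p>

```python
# Each tuple: (predicted QA pair id, confidence, annotated exact match or None).
# Hypothetical data for illustration.
predictions = [
    ("parking-online", 0.92, "parking-online"),
    ("baggage-liquids", 0.40, "baggage-milk"),
    ("wifi-access", 0.75, "wifi-access"),
    ("lost-found", 0.15, None),
]

def precision_recall(preds, threshold):
    """Precision/recall when only predictions at or above `threshold` count."""
    accepted = [(pred, gold) for pred, conf, gold in preds if conf >= threshold]
    correct = sum(1 for pred, gold in accepted if pred == gold)
    relevant = sum(1 for _, _, gold in preds if gold is not None)
    precision = correct / len(accepted) if accepted else 1.0  # convention: empty set
    recall = correct / relevant if relevant else 0.0
    return precision, recall

# Sweeping the threshold traces the precision-recall curve.
for t in (0.0, 0.5, 0.9):
    p, r = precision_recall(predictions, t)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")
```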
<p>We’re now ready to delve into some of the experiments we performed. Note that the data used in these experiments is publicly available in a <a target="_blank" href="https://github.com/nuecho/experiment-data/tree/master/2019-12-13_dialogflow-faq-knowledge-connectors" rel="noopener">Nu Echo GitHub repository</a>.</p>
<h2>Experiments with the FAQ Knowledge connector</h2>
<p>We took all of the QA pairs we extracted from the airport knowledge base and pushed those to a Dialogflow Knowledge Base FAQ connector. Then we trained an agent and tested this agent with the queries in the test set. Here’s the result.</p>
<p><img decoding="async" class="wp-image-5483 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads//2019/12/1.jpg" alt="" width="870" height="493" /></p>
<p>Ouch! This curve shows, at best, a recall of barely 40%. And that’s with less than 30% precision. Something is definitely wrong here. A first analysis of the results reveals something very interesting: The question in the QA pair that correctly answers the user query is often very different from the query. For instance, the correct answer to the query “<em>Can I bring milk with me on the plane for the baby</em>?” is actually found in the QA pair with the following question: “<em>What are the procedures at the security checkpoint when traveling with children?</em>”. In other words, those two formulations are too far apart for any NLU engine to make the connection. In order to identify the correct QA pair, one really has to analyze the answer in order to determine whether it answers the query.</p>
<p>Unfortunately, Dialogflow seems to mostly rely on the question in the QA pair when predicting the best QA pairs and that creates an issue: The more information there is in a FAQ answer, the more difficult it is to reduce it to a single question.</p>
<h2>What if QA pairs could have multiple questions?</h2>
<p>Contrary to defined intents, Dialogflow FAQ knowledge connectors are limited to a single question per QA pair. While this makes sense if the goal is to use existing FAQ knowledge bases “as is”, it may limit the achievable question answering performance. But what if we work around that restriction by including multiple copies of the same QA pair, but using different question formulations (different questions, same answer)? This could allow us to capture different formulations of the same question, as well as entirely different questions for which the answer is correct.</p>
<p>Here is how we did it:</p>
<ul>
<li>We selected the top 10 most frequent QA pairs in the corpus. For each of them, we created several new QA pairs containing the same answer, but a different question (using questions from the train set). We called this the expanded FAQ set.</li>
<li>We created a new agent trained with this expanded set of QA pairs.</li>
<li>We tested this new agent on the test set.</li>
</ul>
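<p>The expansion step above is a simple data transformation: every train query annotated with one of the top QA pairs becomes an extra question-answer row sharing that pair’s answer. The QA pair ids, queries, and answers below are invented for illustration.</p>

```python
from collections import Counter

# Hypothetical annotated train queries: (query text, exact-match QA pair id).
train = [
    ("how do I book parking", "parking-online"),
    ("reserve a parking spot on the web", "parking-online"),
    ("can I take milk through security for my baby", "children-security"),
]

# Original FAQ: one (question, answer) per QA pair id.
faq = {
    "parking-online": ("How do I reserve a parking space online?",
                       "Use the Parking section of our website."),
    "children-security": ("What are the procedures at the security checkpoint "
                          "when traveling with children?",
                          "Baby food and milk are allowed in reasonable quantities."),
}

# Keep the original rows, then add one extra row per train query whose
# exact match is among the most frequent QA pairs (top 10 in our experiment).
top_ids = [qa_id for qa_id, _ in Counter(g for _, g in train).most_common(10)]
expanded = list(faq.values())
for query, qa_id in train:
    if qa_id in top_ids:
        _, answer = faq[qa_id]
        expanded.append((query, answer))

print(len(expanded))  # original pairs + one row per matching train query
```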
<p>The graph below compares the performance of this new agent with the original one. There is a definite improvement in recall, but precision still remains very low.</p>
<p><img decoding="async" class="wp-image-5485 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads//2019/12/2.jpg" alt="" width="887" height="533" /></p>
<h2>FAQ vs Intents</h2>
<p>How do defined intents compare with Knowledge Base FAQ? To find out, we created an agent with one intent per FAQ pair. For each intent, the set of training phrases included the original question in the QA pair, plus all the queries in the train set labelled with that QA pair as an exact match. Then we tested this new agent on the test set. The graph below compares this new result with the previous two results.</p>
<p><img decoding="async" class="wp-image-5487 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads//2019/12/3.jpg" alt="" width="897" height="522" /></p>
<p>That is an amazing jump in performance. Granted, these are not great results, but at least we know we are heading in the right direction and that performance could still be improved a lot.</p>
<h2>A quick look at Knowledge Base Articles</h2>
<p>As mentioned before, Dialogflow offers two types of knowledge connectors: FAQ and Knowledge Base Articles. Knowledge Base Articles are based on the technologies used by Google Search, which look for answers to questions by reading and understanding entire documents and extracting a portion of a document that contains the answer to the question. This is often referred to as <a target="_blank" href="https://ai.googleblog.com/2019/01/natural-questions-new-corpus-and.html" rel="noopener">open-domain question answering</a>.</p>
<p>We wanted to see how this would perform on our FAQ knowledge base. To get the best possible results, we reviewed and edited the FAQ answers to make sure we followed the <a target="_blank" href="https://cloud.google.com/dialogflow/docs/knowledge-connectors#supported-content" rel="noopener">best practices</a> recommended by Google. This includes avoiding single-sentence paragraphs, converting tables and lists into well-formed sentences, and removing extraneous content. We also made sure that each answer was completely self-contained and could be understood without knowing its FAQ category and sub-category. Finally, whenever necessary, we added text to make it clear what question was being answered. The edited FAQ answers are provided in the <a target="_blank" href="https://github.com/nuecho/experiment-data/tree/master/2019-12-13_dialogflow-faq-knowledge-connectors" rel="nofollow noopener noreferrer">Nu Echo GitHub repository</a>.</p>
<p>The result is shown below (green curve, bottom left). What this shows is that Knowledge Base Articles just don’t work for that particular knowledge base. The question is: why?</p>
<p><img decoding="async" class="wp-image-5489 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads//2019/12/4.jpg" alt="" width="874" height="543" /></p>
<p>Although further investigation is required, a quick analysis immediately revealed one issue: Some frequent QA pairs don’t actually contain the answer to the user query, but instead provide a link to a document containing the desired information. This may explain why, in those cases, the Article Knowledge Connector couldn’t match the answer to the query.</p>
<h2>Conclusion</h2>
<p>We wanted to see whether it was possible to achieve good question answering performance by relying solely on Dialogflow Knowledge Connector with existing FAQ knowledge bases. The answer is most likely “no”. Why? There are a number of reasons:</p>
<ul>
<li>While defined intents can have as many training phrases as we want, FAQ knowledge bases are limited to a single question per QA pair. This turns out to be a significant problem since it is difficult to effectively generalize from a single example. That’s especially true for QA pairs with long answers, which can correctly answer a wide range of very different questions.</li>
<li>FAQ knowledge bases are often not representative of real user queries and, therefore, their coverage tends to be low. Moreover, they often need a lot of manual cleanup, which means that we cannot assume that the system will be able to automatically take advantage of an updated FAQ knowledge base.</li>
<li>Many user queries require a structured representation of the query (i.e., with both intents and entities) and a structured knowledge base to be able to produce the required answer. For instance, to answer the question “<em>Are there any restaurants serving vegan meals near gate 79?</em>”, we need a knowledge base containing all restaurants, their location, and the foods they serve, as well as an algorithm capable of calculating a distance between two locations.</li>
<li>Many real frequent user queries require access to back-end transactional systems (e.g., “<em>What is the arrival time of flight UA789?</em>”). Again, this cannot be implemented with a static FAQ knowledge base.</li>
</ul>
<p>The approach we recommend for building a question answering system with Dialogflow is consistent with what Google actually recommends, that is, use Knowledge Connectors to complement defined intents. More specifically, use the power of defined intents, leveraging entities and lots of training phrases, to achieve a high success level on the really frequent questions (the short tail).</p>
<p>Then, for those long tail questions that cannot be answered this way, use knowledge connectors with whatever knowledge bases are available to propose possible answers that the user will hopefully find relevant.</p>
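<p>This short-tail/long-tail combination can be sketched as a simple routing rule: trust a defined intent when its confidence is high enough, otherwise fall back to the knowledge connector’s candidate answers. The thresholds and replies below are invented, not Dialogflow defaults.</p>

```python
# Hypothetical confidence thresholds for the hybrid routing rule.
INTENT_THRESHOLD = 0.7
KB_THRESHOLD = 0.3

def answer(intent_match, kb_matches):
    """Route between defined intents and knowledge-connector results.

    intent_match: (response, confidence) from intent matching, or None.
    kb_matches: list of (response, confidence) from the knowledge connector.
    """
    if intent_match and intent_match[1] >= INTENT_THRESHOLD:
        return intent_match[0]  # short tail: a confident defined intent
    candidates = [resp for resp, conf in kb_matches if conf >= KB_THRESHOLD]
    if candidates:
        return candidates[0]  # long tail: best knowledge-base candidate
    return "Sorry, I couldn't find an answer to that."

print(answer(("Parking can be reserved online.", 0.85), []))
print(answer(None, [("See our baggage policy page.", 0.55)]))
```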
<p><span style="font-weight: 400;">Thanks to Guillaume Voisine and Mathieu Bergeron for doing much of the experimental work and for their invaluable help writing this blog.</span></p>
<div class='et-box et-shadow'>
					<div class='et-box-content'><h2 style="text-align: center;"><span style="color: #333333;">Conversational automation initiatives</span></h2>
<div class="et_slidecontent et_shortcode_slide_active" style="display: block; text-align: center;"><span style="color: #333333;">Take our <a target="_blank" href="https://fr.surveymonkey.com/r/TXVZFK5" rel="noopener">survey</a> on the impact of innovative technologies on the customer experience, and find out the results from the collected data. (All data collected will remain anonymous.)</span></div></div></div><p>The post <a href="https://www.nuecho.com/question-answering-experiments-with-the-dialogflow-faq-knowledge-connectors/">Question answering experiments with the Dialogflow FAQ Knowledge Connectors</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Chatbots, Voicebots, IVA, IVR: Sorting through the confusion</title>
		<link>https://www.nuecho.com/chatbots-voicebots-iva-ivr/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=chatbots-voicebots-iva-ivr</link>
		
		<dc:creator><![CDATA[Yves Normandin]]></dc:creator>
		<pubDate>Wed, 11 Dec 2019 16:21:25 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVA]]></category>
		<category><![CDATA[IVR]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=5577</guid>

					<description><![CDATA[<p>​In the past few years, we have witnessed the introduction of a bunch of new terms and expressions related to conversational systems and interfaces: chatbots, voicebots, intelligent virtual agents (IVAs), intelligent virtual assistants (IVAs), etc. Unfortunately, all of these tend to mean different things to different people, which ends up generating a lot of confusion [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/chatbots-voicebots-iva-ivr/">Chatbots, Voicebots, IVA, IVR: Sorting through the confusion</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>​In the past few years, we have witnessed the introduction of a bunch of new terms and expressions related to conversational systems and interfaces: chatbots, voicebots, intelligent virtual agents (IVAs), intelligent virtual assistants (IVAs), etc. Unfortunately, all of these tend to mean different things to different people, which ends up generating a lot of confusion in the industry.</p>
<p>In an attempt to, if not eliminate, at least reduce some of that confusion, I’ll propose some broad definitions for these terms.</p>
<p>A <strong>chatbot</strong> is an automated system with which users interact through a “chat-like” interface. This includes messaging channels such as Messenger, WhatsApp, Slack&#8230; but it also includes SMS, iMessage, as well as other chat-like interfaces such as web chats, chat widgets in mobile applications, etc. Although chatbot interactions should primarily be done through text input and output, in practice they increasingly incorporate rich media (depending on what the channel supports) such as buttons, images, carousels, webviews, etc. In reality, many chatbots have little or no support for text input, relying primarily on buttons for user input. A chatbot is not necessarily conversational (see <a target="_blank" href="https://www.nuecho.com/news-events/what-do-you-mean-conversational-ivr/" rel="noopener">here</a> for an explanation of what we mean by conversational); in fact, most chatbots are highly directed, menu-driven “dialogs”.</p>
<p>A <strong>voicebot</strong> is a chatbot with which users can interact vocally. This assumes that the chatbot behind the voicebot can handle natural language input and it requires a capability to convert voice input into text (or directly into intents), as well as text output into voice. Example voicebots include any bots accessible through a voice channel, which include the now ubiquitous smart home speakers, but also the plain old telephone channel as well as any VoIP channel, for instance the call channels of Skype, Messenger, WhatsApp, Slack, etc. In that sense, a conversational IVR could be seen as a voicebot. Another example would be a Dialogflow voicebot, accessible through any voice channel, that takes advantage of Dialogflow’s ability to <a target="_blank" href="https://cloud.google.com/dialogflow-enterprise/docs/detect-intent-audio" rel="noopener">detect intent from audio</a>.</p>
<p>An <strong>Intelligent Virtual Agent (IVA)</strong> is a robot that simulates an agent (which, in this context, really means a contact center agent). It provides some of the services normally provided by a contact center agent through a communication with users – via voice or text channels – that resembles human-to-human communication. For reference, DMG defines an IVA as “<em>A system that utilizes artificial intelligence, machine learning, advanced speech technologies (including NLU/NLP/NLG) to simulate live and unstructured cognitive conversations for voice, text, or digital interactions via a digital persona.</em>” A virtual agent can hence be a chatbot, a voicebot, or both.</p>
<p>An <strong>Intelligent Virtual Assistant</strong> (also IVA, unfortunately) is a system that is dedicated to helping its user, either by providing useful information or advice (weather or traffic information, financial advice, etc.), by answering questions, or by accomplishing tasks on their behalf (e.g., planning meetings, booking hotels, paying bills, and so on). Interaction with an intelligent virtual assistant is often done through text or voice conversational channels, which effectively makes it a chatbot or a voicebot, but it can also be done through mobile or web applications.</p>
<p>An <strong>IVR (Interactive Voice Response)</strong> is an interactive telephone system that is primarily used in a call center to steer calls to the appropriate agent, and possibly to enable callers to perform some self-service transactions. Most IVR systems today are anything but conversational, relying instead primarily on menu navigation through DTMF (touch-tone) user inputs. Several IVR systems also enable speech input, but most of these only support voice menus and directed dialogs. More recently, natural language call steering applications, which enable callers to state the purpose of their call in their own words, have gained in popularity, but they remain a very small minority of the IVR systems out there. The surge in popularity of conversational systems, however, is inevitably now impacting IVR, so expect to see a rapidly increasing number of <strong>IVR voicebots</strong> being deployed in the near future.</p><p>The post <a href="https://www.nuecho.com/chatbots-voicebots-iva-ivr/">Chatbots, Voicebots, IVA, IVR: Sorting through the confusion</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Does your FAQ stand for Fail to Answer Questions?</title>
		<link>https://www.nuecho.com/faq-chatbot-answering/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=faq-chatbot-answering</link>
		
		<dc:creator><![CDATA[Yves Normandin]]></dc:creator>
		<pubDate>Mon, 25 Nov 2019 22:00:55 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVA]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=5399</guid>

					<description><![CDATA[<p>From FAQs to chatbots: Improve customer experience with conversational question answering. A significant portion of customer service inquiries is about users wanting an answer to a question. Organizations are rightfully motivated to provide efficient means for users to find answers to their questions autonomously (i.e., without interacting with a human agent) since it can improve [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/faq-chatbot-answering/">Does your FAQ stand for Fail to Answer Questions?</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h3>From FAQs to chatbots: Improve customer experience with conversational question answering.</h3>
<p>A significant portion of customer service inquiries comes from users wanting an answer to a question. Organizations are rightfully motivated to provide efficient means for users to find answers to their questions autonomously (i.e., without interacting with a human agent), since it can improve user experience while greatly reducing costs by freeing up valuable time for their customer service agents.</p>
<p>To achieve this, organizations traditionally offer a Frequently Asked Questions (FAQ) section on their website, often complemented by a search capability that can return relevant articles from a knowledge base, the website, or both. In many cases, these can provide fairly effective means for users to get the information they’re looking for, thereby reducing pressure on the contact center.</p>
<p>In that context, how can a question-answering customer service chatbot add value? Certainly, that cannot be by providing a chat-like interface to a static FAQ or to an existing website search capability. That just wouldn’t be very compelling (for a discussion on this topic, see Tobias Goebel’s great blog post explaining why <a target="_blank" href="https://chatbotsmagazine.com/why-you-cant-just-convert-faqs-into-a-chatbot-1-1-92205141d008" rel="noopener">we can’t just convert FAQs into a chatbot 1:1</a>).</p>
<h2>Chatbot question answering: beyond static FAQs and search</h2>
<p>To provide real question-answering value, a customer service chatbot has to go beyond the FAQ capabilities already provided on the website. This can be achieved in a number of ways, including by:</p>
<ol>
<li>Directly answering users’ questions rather than providing links to relevant documents. If I ask “Are strollers allowed on airplanes?” I’d like a clear response (“Yes, strollers are allowed.”) rather than a list of articles that may or may not answer my question.</li>
<li>Truly leveraging a conversational interface, for instance by enabling the chatbot to clarify vague questions:</li>
</ol>
<p style="padding-left: 60px;">User: I’m looking for a telephone number<br />
Chatbot: Who would you like to call?<br />
User: Lost items<br />
Chatbot: The lost-and-found telephone number is 123-456-7890</p>
<p style="padding-left: 30px;">Or by enabling users to ask follow-on questions:</p>
<p style="padding-left: 60px;">User: Can I bring breast milk on a plane?<br />
Chatbot: Yes, breast milk is allowed on airplanes.<br />
User: What about strollers?<br />
Chatbot: Strollers are also allowed.</p>
<ol start="3">
<li>Providing dynamic and/or personalized answers, which require access to back-end systems. For instance:</li>
</ol>
<p style="padding-left: 60px;">What is the arrival time for flight United 285?<br />
When should I expect to receive my luggage?</p>
<ol start="4">
<li>Enabling question answering at any time during the course of a chatbot conversation.</li>
<li>Giving users the ability to continue the conversation with a human agent, if the chatbot isn’t able to solve the user’s issue.</li>
</ol>
<p>In a chatbot, the very frequent queries (the short tail) can &#8211; and should &#8211; be handled using standard approaches (e.g., with intents and entities). While this requires ongoing maintenance to cover the new frequent queries that will inevitably appear, it’s the approach that will provide the best results.</p>
<p>Meanwhile, however, there will always be all those long-tail queries that would require too much effort to support that way. So when the chatbot doesn’t have the answer to a question, it is best to fall back to a search-like mode that can automatically leverage all those documents and knowledge bases that you already have. They most likely contain answers to many of these questions. This not only reduces development effort, but also makes it much easier to keep the system up to date with the latest answers.</p>
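The short-tail/long-tail routing described above can be sketched as follows. This is a minimal illustration, not a platform API: the intent matcher, the knowledge-base search function and the confidence threshold are all hypothetical stand-ins for whatever your conversational platform provides.

```javascript
// Hypothetical sketch: intents first (short tail), then knowledge-base
// search (long tail), then escalation to a human agent.
const INTENT_CONFIDENCE_THRESHOLD = 0.7; // tune per platform

function routeQuery(query, intentMatcher, knowledgeSearch) {
  // 1. Short tail: try the hand-crafted intents first.
  const match = intentMatcher(query);
  if (match && match.confidence >= INTENT_CONFIDENCE_THRESHOLD) {
    return { source: 'intent', answer: match.answer };
  }
  // 2. Long tail: fall back to searching existing documents/FAQs.
  const hit = knowledgeSearch(query);
  if (hit) {
    return { source: 'knowledge-base', answer: hit };
  }
  // 3. Nothing found: offer escalation to a human agent.
  return { source: 'fallback', answer: 'Let me connect you with an agent.' };
}
```

The key design point is the ordering: hand-crafted intents stay authoritative for frequent queries, and the search fallback only fires when they don’t match confidently.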
<h2>Search-like capabilities in conversational platforms</h2>
<p>Some conversational platforms provide search-like capabilities that make it possible to automatically leverage existing knowledge bases or documents to search for answers to those user queries that the chatbot cannot answer. For instance:</p>
<ul>
<li>Chatbots developed with <a target="_blank" href="https://www.ibm.com/cloud/watson-assistant/" rel="noopener">Watson Assistant</a> can leverage <a target="_blank" href="https://www.ibm.com/cloud/watson-discovery" rel="noopener">Watson Discovery</a> for that purpose. Performance can be improved by using <a target="_blank" href="https://www.ibm.com/watson/services/knowledge-studio/" rel="noopener">Watson Knowledge Studio</a> to teach Watson about the language and relationships that are useful in order to understand your specific domain or industry.</li>
<li>Chatbots developed with Google <a target="_blank" href="https://dialogflow.com/" rel="noopener">Dialogflow</a> can leverage Dialogflow’s <a target="_blank" href="https://cloud.google.com/dialogflow/docs/knowledge-connectors" rel="noopener">Knowledge Connectors</a> to search knowledge bases for a response to a user query. Knowledge connectors come in two varieties: FAQs and knowledge base articles. FAQs are used to integrate existing Frequently Asked Questions (e.g., from a website); in that case, finding a response means finding the FAQ question-answer pairs (QA pairs) that best match the user query. With knowledge base articles, Dialogflow actually looks for the answer to user queries within the articles and returns the most relevant portion of the article as the answer.</li>
</ul>
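As a rough illustration of the Dialogflow option, here is how a Knowledge Connector query might be shaped. This is a sketch only: the project, session and knowledge-base IDs are placeholders, and the actual detection call would go through the Dialogflow client library’s SessionsClient (Knowledge Connectors are exposed through the v2beta1 API); only the request-building step is shown here.

```javascript
// Sketch: build a detect-intent request restricted to one knowledge base.
// All identifiers below are placeholders for illustration.
function buildKnowledgeRequest(projectId, sessionId, knowledgeBaseId, query) {
  return {
    session: `projects/${projectId}/agent/sessions/${sessionId}`,
    queryInput: {
      text: { text: query, languageCode: 'en-US' },
    },
    queryParams: {
      // Restrict matching to the given knowledge base (FAQ or articles).
      knowledgeBaseNames: [
        `projects/${projectId}/knowledgeBases/${knowledgeBaseId}`,
      ],
    },
  };
}
```

The response to such a request would then contain candidate answers with confidence scores, which the chatbot can compare against its own intent-match confidence before choosing what to say.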
<p>In future blog posts, we will report on experiments with some of these platforms. Stay tuned.</p><p>The post <a href="https://www.nuecho.com/faq-chatbot-answering/">Does your FAQ stand for Fail to Answer Questions?</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Nexmo Audio Streaming Oumph: Call Recording Server for the Masses</title>
		<link>https://www.nuecho.com/nexmo-audio-streaming-oumph-call-recording-server-for-the-masses/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=nexmo-audio-streaming-oumph-call-recording-server-for-the-masses</link>
		
		<dc:creator><![CDATA[Jeremy Parent]]></dc:creator>
		<pubDate>Mon, 20 May 2019 21:58:51 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVA]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3467</guid>

					<description><![CDATA[<p>Nexmo Voice Application Creating a Nexmo application is not complicated, as there is not much to configure: you have to provide a phone number and two webhook URLs: one to return a Nexmo Call Control Object (NCCO) containing instructions and one to receive event data. Similarly, using your Nexmo app is simple: A call comes [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/nexmo-audio-streaming-oumph-call-recording-server-for-the-masses/">Nexmo Audio Streaming Oumph: Call Recording Server for the Masses</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><span style="font-weight: 400;">Nexmo Voice Application</span></h2>
<p><span style="font-weight: 400;">Creating a </span><a target="_blank" href="https://developer.nexmo.com/concepts/guides/applications" rel="noopener"><span style="font-weight: 400;">Nexmo application </span></a><span style="font-weight: 400;">is not complicated, as there is not much to configure: you have to provide a phone number and two webhook URLs: one to return a </span><a target="_blank" href="https://developer.nexmo.com/voice/voice-api/ncco-reference" rel="noopener"><span style="font-weight: 400;">Nexmo Call Control Object (NCCO)</span></a><span style="font-weight: 400;"> containing instructions and one to receive event data. Similarly, using your Nexmo app is simple:</span></p>
<ol>
<li style="font-weight: 400;"><span style="font-weight: 400;">A call comes in through your application.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">The application asks for instructions at the provided web address.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">The application executes the instructions.</span></li>
</ol>
<p><span style="font-weight: 400;">This last step is when the call actually starts. Throughout this process, the Nexmo application sends event messages to the webhook containing information on the current state of the call.</span></p>
<p><span style="font-weight: 400;">Implementing the new websocket feature simply means adding the </span><a target="_blank" href="https://developer.nexmo.com/voice/voice-api/guides/websockets" rel="noopener"><span style="font-weight: 400;">NCCO instruction</span></a><span style="font-weight: 400;"> to connect to your web server’s address via websocket.</span></p>
<p><img decoding="async" class="aligncenter wp-image-3472 size-full" src="https://www.nuecho.com/wp-content/uploads/2019/05/Recording-Server-Infrastructure.jpg" alt="" width="948" height="394" srcset="https://www.nuecho.com/wp-content/uploads/2019/05/Recording-Server-Infrastructure.jpg 948w, https://www.nuecho.com/wp-content/uploads/2019/05/Recording-Server-Infrastructure-300x125.jpg 300w, https://www.nuecho.com/wp-content/uploads/2019/05/Recording-Server-Infrastructure-768x319.jpg 768w" sizes="(max-width: 948px) 100vw, 948px" /></p>
<h2><span style="font-weight: 400;">Providing Instructions with Google Cloud Functions (GCF)</span></h2>
<p><a target="_blank" href="https://cloud.google.com/functions/" rel="noopener"><span style="font-weight: 400;">GCF</span></a><span style="font-weight: 400;"> is a platform on which we can deploy a simple </span><a target="_blank" href="https://expressjs.com/" rel="noopener"><span style="font-weight: 400;">Express</span></a><span style="font-weight: 400;"> application to be used as a function called via HTTP. Its resources are limited and its execution time must be short. Thanks to GCF, we can easily answer Nexmo’s request with the call instructions. On this platform, we can deploy a simple Node.js Express app with the sole purpose of providing an instruction set by returning a JSON object to each request. Two instructions fulfill our needs:</span></p>
<ol>
<li style="font-weight: 400;"><span style="font-weight: 400;">Instruct the caller to leave a message.</span></li>
<li style="font-weight: 400;"><span style="font-weight: 400;">Connect the call to the specified web address via a </span><a target="_blank" href="https://developer.nexmo.com/voice/voice-api/guides/websockets" rel="noopener"><span style="font-weight: 400;">websocket</span></a><span style="font-weight: 400;">.</span></li>
</ol>
<p><span style="font-weight: 400;">To instruct Nexmo on how to proceed, we must answer its request with a JSON object listing all the steps. The object must be in </span><a target="_blank" href="https://developer.nexmo.com/voice/voice-api/ncco-reference" rel="noopener"><span style="font-weight: 400;">NCCO</span></a><span style="font-weight: 400;"> format. In our example, this gives:</span></p>
<p><img decoding="async" class="aligncenter wp-image-3601 size-full" src="https://www.nuecho.com/wp-content/uploads/2019/05/1.png" alt="" width="998" height="399" srcset="https://www.nuecho.com/wp-content/uploads/2019/05/1.png 998w, https://www.nuecho.com/wp-content/uploads/2019/05/1-300x120.png 300w, https://www.nuecho.com/wp-content/uploads/2019/05/1-768x307.png 768w" sizes="(max-width: 998px) 100vw, 998px" /></p>
<p><span style="font-weight: 400;">The </span><i><span style="font-weight: 400;">content-type </span></i><span style="font-weight: 400;">field specifies the audio format: linear PCM at 16000 Hz.</span></p>
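In text form, the NCCO shown above might look like the sketch below: the two steps are a talk action prompting the caller and a connect action streaming audio to our websocket server. The wss:// URI is a placeholder for your own GAE app’s address, and the prompt wording is illustrative.

```javascript
// Sketch of the NCCO returned by the Cloud Function: prompt the caller,
// then stream the call audio to a websocket server.
function buildNcco(websocketUri) {
  return [
    {
      action: 'talk',
      text: 'Please leave a message after the tone.',
    },
    {
      action: 'connect',
      endpoint: [
        {
          type: 'websocket',
          uri: websocketUri, // e.g. the GAE recording server's wss:// address
          // Linear PCM at 16000 Hz, as expected by the recording server.
          'content-type': 'audio/l16;rate=16000',
        },
      ],
    },
  ];
}
```

In the Express app deployed on GCF, the webhook handler would then simply respond with something like `res.json(buildNcco('wss://your-app.example.com/socket'))`.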
<h2><span style="font-weight: 400;">Recording Server on Google App Engine (GAE)</span></h2>
<p><span style="font-weight: 400;">Now for the more challenging part. Since a Google Cloud Function has a short execution time limit (around 5 seconds), it can’t be used to hold a connection for the length of a whole conversation. This is where </span><a target="_blank" href="https://cloud.google.com/appengine/" rel="noopener"><span style="font-weight: 400;">GAE</span></a><span style="font-weight: 400;"> comes in! Applications deployed on this platform can hold a connection for longer periods of time.</span></p>
<p><span style="font-weight: 400;">To create a websocket application in Node.js, one could rely on libraries like </span><a target="_blank" href="https://socket.io/" rel="noopener"><span style="font-weight: 400;">Socket.IO</span></a><span style="font-weight: 400;"> or, as we did, use a combination of packages consisting of </span><a target="_blank" href="https://github.com/websockets/ws" rel="noopener"><span style="font-weight: 400;">ws</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://github.com/HenningM/express-ws" rel="noopener"><span style="font-weight: 400;">express-ws</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://github.com/maxogden/websocket-stream#readme" rel="noopener"><span style="font-weight: 400;">websocket-stream</span></a><span style="font-weight: 400;">. With these, for every connection, a </span><a target="_blank" href="https://nodejs.org/api/stream.html#stream_class_stream_duplex" rel="noopener"><span style="font-weight: 400;">duplex stream</span></a><span style="font-weight: 400;"> object is created and can be piped anywhere. Our next step is to create an audio file from it.</span></p>
<p><span style="font-weight: 400;">As specified in the NCCO, the audio data is in PCM format with a sample rate of 16000 Hz. This is valid WAV audio. Using the </span><a target="_blank" href="https://github.com/TooTallNate/node-wav#readme" rel="noopener"><span style="font-weight: 400;">wav</span></a><span style="font-weight: 400;"> package, we can first wrap the PCM stream into a WAV container and then pipe the stream into a file writer, specify the audio format, define the output path and the job is done!</span></p>
<p><img decoding="async" class="aligncenter wp-image-3603 size-full" src="https://www.nuecho.com/wp-content/uploads/2019/05/2.png" alt="" width="803" height="811" srcset="https://www.nuecho.com/wp-content/uploads/2019/05/2.png 803w, https://www.nuecho.com/wp-content/uploads/2019/05/2-150x150.png 150w, https://www.nuecho.com/wp-content/uploads/2019/05/2-297x300.png 297w, https://www.nuecho.com/wp-content/uploads/2019/05/2-768x776.png 768w" sizes="(max-width: 803px) 100vw, 803px" /></p>
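The wav package handles the container for you; purely for illustration, here is what “wrapping PCM into a WAV container” amounts to: prepending a 44-byte RIFF header that describes the format (mono, 16-bit, 16000 Hz). This sketch uses only the Node.js standard library.

```javascript
// Sketch: build the 44-byte RIFF/WAVE header for a raw PCM payload.
// In production you would use the wav package's FileWriter instead.
function wavHeader(pcmByteLength, sampleRate = 16000, channels = 1, bitDepth = 16) {
  const blockAlign = channels * bitDepth / 8;
  const header = Buffer.alloc(44);
  header.write('RIFF', 0);
  header.writeUInt32LE(36 + pcmByteLength, 4); // total file size minus 8
  header.write('WAVE', 8);
  header.write('fmt ', 12);
  header.writeUInt32LE(16, 16);                // fmt chunk size
  header.writeUInt16LE(1, 20);                 // audio format: 1 = PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(sampleRate * blockAlign, 28); // byte rate
  header.writeUInt16LE(blockAlign, 32);
  header.writeUInt16LE(bitDepth, 34);
  header.write('data', 36);
  header.writeUInt32LE(pcmByteLength, 40);     // size of the PCM payload
  return header;
}
```

Writing this header followed by the raw PCM bytes yields a playable WAV file, which is exactly what the wav package’s writer does for the piped stream.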
<h2><span style="font-weight: 400;">Saving files to Google Cloud Storage</span></h2>
<p><span style="font-weight: 400;">In GAE we can only use temporary folders for writing and reading files. This means that another resource is needed in order to store these audio files permanently. Using the Google Cloud Storage API, we are able to store files in the project’s storage space. And guess what? Uploading files via the API is as simple as following the example already laid out for us in Google’s </span><a target="_blank" href="https://cloud.google.com/appengine/docs/flexible/nodejs/using-cloud-storage" rel="noopener"><span style="font-weight: 400;">documentation</span></a><span style="font-weight: 400;">.</span></p>
<p><img decoding="async" class="aligncenter wp-image-3605 size-full" src="https://www.nuecho.com/wp-content/uploads/2019/05/3.png" alt="" width="989" height="712" srcset="https://www.nuecho.com/wp-content/uploads/2019/05/3.png 989w, https://www.nuecho.com/wp-content/uploads/2019/05/3-300x216.png 300w, https://www.nuecho.com/wp-content/uploads/2019/05/3-768x553.png 768w" sizes="(max-width: 989px) 100vw, 989px" /></p>
<p><span style="font-weight: 400;">With this, we have our full recording server. A simple call will end up creating audio recordings in a Google Cloud Storage bucket in WAV format. Having successfully come up with a solution to stream audio from a live call to a web server, we can now think of any number of possible applications, such as voicebots, real-time transcription of caller speech, sentiment analysis or even passive voice biometrics. These can even be used in combination with the recording server. Voice interfaces such as this may be used everywhere in the future. The next step for this technology is to become one of the easiest ways of feeding audio from a phone call into any kind of application.</span></p><p>The post <a href="https://www.nuecho.com/nexmo-audio-streaming-oumph-call-recording-server-for-the-masses/">Nexmo Audio Streaming Oumph: Call Recording Server for the Masses</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Dialogflow Distilled: On Preemptive Slot-filling versus Branching</title>
		<link>https://www.nuecho.com/dialogflow-distilled-on-preemptive-slot-filling-versus-branching/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=dialogflow-distilled-on-preemptive-slot-filling-versus-branching</link>
		
		<dc:creator><![CDATA[Pascal Deschênes]]></dc:creator>
		<pubDate>Thu, 02 May 2019 18:11:16 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Dialogflow]]></category>
		<category><![CDATA[IVA]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3005</guid>

					<description><![CDATA[<p>As we gain experience with Google Dialogflow, we like to take a step back and identify usage patterns to feed into our development practices. This blog post aims at depicting and distilling one such pattern: preemptive slot-filling versus branching. Let’s say that one of your chatbot requirements is to perform a banking payment to [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/dialogflow-distilled-on-preemptive-slot-filling-versus-branching/">Dialogflow Distilled: On Preemptive Slot-filling versus Branching</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>As we gain experience with Google Dialogflow, we like to take a step back and identify usage patterns to feed into our development practices. This blog post aims at depicting and distilling one such pattern: preemptive slot-filling versus branching.</p>
<p>Let’s say that one of your chatbot requirements is to perform a banking payment to a specific merchant. This is a fairly standard transaction involving a single intent capturing a few slots, such as amount, account, merchant and date.</p>
<p>&gt; <a href="https://medium.com/cxinnovations/dialogflow-distilled-on-preemptive-slot-filling-versus-branching-9b662eeed027" target="_blank" rel="noopener noreferrer">Read full version on Medium</a></p><p>The post <a href="https://www.nuecho.com/dialogflow-distilled-on-preemptive-slot-filling-versus-branching/">Dialogflow Distilled: On Preemptive Slot-filling versus Branching</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Conversational UX for chatbots &#8211; part 2</title>
		<link>https://www.nuecho.com/conversational-ux-for-chatbots-part-2/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=conversational-ux-for-chatbots-part-2</link>
		
		<dc:creator><![CDATA[Guillaume Voisine]]></dc:creator>
		<pubDate>Fri, 29 Mar 2019 17:35:50 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVA]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3187</guid>

					<description><![CDATA[<p>An overview of essential discourse patterns, part 2 Counter-proposals A conversation with a chatbot doesn’t have to follow a simple question-answer structure. For example, bots can offer suggestions to the user: this paves the way to even more complex interactions. This means more fluid and natural conversations, but also that more efforts need to be [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/conversational-ux-for-chatbots-part-2/">Conversational UX for chatbots – part 2</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>An overview of essential discourse patterns, part 2</h2>
<h3>Counter-proposals</h3>
<p>A conversation with a chatbot doesn’t have to follow a simple question-answer structure. For example, bots can offer suggestions to the user: this paves the way to even more complex interactions. This means more fluid and natural conversations, but also more effort invested in the dialogue’s design.</p>
<p>If your bot suggests actions or responses, you should consider that the user may want to modify them. This is what we call a counter-proposal. It has the following structure:</p>
<ul>
<li>Chatbot makes a suggestion</li>
<li>User refuses the suggestion and modifies it</li>
<li>Chatbot acknowledges the correction.</li>
</ul>
<blockquote><p><em>To feel organic, a good counter-proposal should require exactly one interaction from the user, immediately after the bot’s proposal.</em></p></blockquote>
<p>The simple fact that the user supplies a corrected value implies that the original proposal is refused. Of course, you could always achieve the same effect with more steps:</p>
<ol>
<li>Chatbot makes a suggestion</li>
<li>User refuses it</li>
<li>Chatbot asks what user wants</li>
<li>User says it</li>
<li>Chatbot acknowledges.</li>
</ol>
<p>But it feels contrived, and only supporting that structure could lead to a scenario where the user would have to repeat the same information twice. This is a cardinal sin in chatbot-land:</p>
<p><img decoding="async" class="wp-image-1820 alignnone size-medium" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land-300x284.png" alt="" width="300" height="284" srcset="https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land-300x284.png 300w, https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land.png 382w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p>Feels more artificial than intelligent, right? This is why, if you want to offer a more natural conversational experience, you should not implement bot suggestions without support for counter-proposals, like so:</p>
<p><img decoding="async" class="wp-image-1822 alignnone size-medium" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land-2-300x210.png" alt="" width="300" height="210" srcset="https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land-2-300x210.png 300w, https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land-2.png 388w" sizes="(max-width: 300px) 100vw, 300px" /></p>
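The one-turn counter-proposal pattern above can be sketched as a small handler: after the bot proposes a value, a user reply carrying a different value counts as both a refusal and a correction in a single interaction. The slot extractor here is a hypothetical stand-in for your NLU.

```javascript
// Sketch: interpret the user's reply to a bot proposal.
// extractValue is a placeholder for the platform's slot extraction.
function handleProposalReply(proposedValue, userReply, extractValue) {
  const corrected = extractValue(userReply);
  if (corrected !== null && corrected !== proposedValue) {
    // One interaction: the refusal and the new value arrive together.
    return { accepted: false, value: corrected };
  }
  // A plain confirmation (or a repeat of the same value) accepts the proposal.
  return { accepted: true, value: proposedValue };
}
```

The point of the sketch is that no extra “do you refuse?” turn is needed: the presence of a different value in the reply is itself the refusal.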
<h3>Contextual constraints for entities</h3>
<p>Out of the box, most conversational frameworks (like <a target="_blank" href="https://dialogflow.com/" rel="noopener">Google Dialogflow</a> or <a target="_blank" href="https://www.ibm.com/watson/" rel="noopener">IBM Watson</a>) offer some ways to control what information can be extracted as entities in the dialogue. Typically, there are two options: use already defined system entities or create custom entities, most of the time by writing a list of possible values, although some platforms also accept regular expressions to delimit what can be extracted as a given entity. Here is an example of Entity declaration in Dialogflow:</p>
<p><img decoding="async" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/03/Dialogflow.png" /></p>
<p>This is all very nice and useful, but it can be quite lacking in terms of flexibility. While the type of an entity is pertinent information, in a lot of situations it’s not in itself enough to control the flow of the dialogue. Suppose you want your bot to ask for a date to, let’s say, book a flight. Checking that the answer given by the user is actually a date is one thing, but it’s another to verify that the date is valid in the context of the conversation. Here, one constraint would be that the date needs to be in the future. But in another conversation, it could be perfectly valid (or even expected) for the user to give a date in the past (as when the user is asked for a birthdate). A constraint can also be defined by a dependency on another entity. To keep the flight booking example, if the bot asks for a return date, it stands to reason that it must be later than the departure date.</p>
<blockquote><p><em>Constraint checking on entities is a basic conversational pattern that can be very useful to control the flow of a dialogue.</em></p></blockquote>
<p>Of course, like any conversational concept, constraints checking is not a silver bullet; it is but a piece of a larger puzzle.</p>
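The flight-booking constraints discussed above could be sketched as plain validation functions layered on top of entity extraction; each returns null when the value is acceptable, or an error message the bot can speak back. The function names and messages are illustrative, not part of any framework.

```javascript
// Sketch: contextual constraints on a date entity for flight booking.
// A null return means the constraint is satisfied.
function checkDepartureDate(date, now = new Date()) {
  return date > now ? null : 'The departure date must be in the future.';
}

function checkReturnDate(returnDate, departureDate) {
  return returnDate > departureDate
    ? null
    : 'The return date must be later than the departure date.';
}
```

Note that the same date entity would pass a different constraint in a different context (e.g., a birthdate must be in the past), which is why these checks belong to the dialogue, not to the entity definition.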
<h3>Digressions</h3>
<p>According to the Oxford English Dictionary, a digression is “a temporary departure from the main subject in speech or writing.” This is something we humans do every day: a quick discursive detour to ask for more information or inject a little by-the-way into an ongoing conversation. Our brain is naturally wired to easily handle this kind of context switching. Most dialogue models for chatbots, sadly, are not.</p>
<p>It’s a shame, really, because digressions should not be considered as an optional feature, but as a cornerstone of dialogue design.</p>
<blockquote><p><em>The ability for a bot to handle multiple concurrent dialogue contexts is fundamental to create a believable conversational virtual agent.</em></p></blockquote>
<p>Without this, chatbots feel very limited, constrained to a specific discursive path from which the user is not really permitted to stray. Of course, digression support is not magic either, and chatbots, especially task-oriented ones, will probably always be restrained, at least to some extent, to a relatively small conversational perimeter. But supporting digression is mostly about empowering the users by giving them more control on the flow and the shape of the dialogue.</p>
<p>There are a lot of interesting use cases for digression. One of them is the informational query, where the user needs the bot to give them some crucial details before they can make a decision. This can be coupled with bot proposals (and, possibly, counter-proposals):</p>
<p><img decoding="async" class="wp-image-1826 alignnone size-medium" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/03/bots-proposal-293x300.png" alt="" width="293" height="300" srcset="https://www.nuecho.com/wp-content/uploads/2019/03/bots-proposal-293x300.png 293w, https://www.nuecho.com/wp-content/uploads/2019/03/bots-proposal.png 374w" sizes="(max-width: 293px) 100vw, 293px" /></p>
<p>When the user asks how much money he has in his account, he doesn’t really want to change the subject: he just needs more data (in this case, how much money is available in his savings account) in order to make an informed decision. This can be a very useful tool to improve the user-friendliness of a chatbot. Also, we can see from this example that dialogue patterns are not components to be integrated in isolation; they can mesh together to provide a more pleasant flow to the conversation.</p>
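One simple way to model digression support is a stack of dialogue contexts: an informational query is pushed on top of the ongoing task, answered, then popped so the main task resumes where it left off. This is only an illustrative sketch of the idea, not any particular framework’s mechanism.

```javascript
// Sketch: multiple concurrent dialogue contexts as a stack.
class DialogueManager {
  constructor() {
    this.contexts = []; // stack; the top element is the active context
  }
  start(context) {
    this.contexts.push(context);
  }
  // Push a digression on top of the current task.
  digress(context) {
    this.contexts.push(context);
  }
  // Finish the active context and resume the one below it.
  resolve() {
    this.contexts.pop();
    return this.active();
  }
  active() {
    return this.contexts[this.contexts.length - 1] || null;
  }
}
```

In the balance-check example above, the transfer task stays on the stack while the balance query is active, so the bot can return to it without asking the user to repeat anything.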
<p>Hopefully you have gained some knowledge about conversational design and why it matters by reading this article. Thanks to my colleagues <a target="_blank" href="https://medium.com/@linda.thibault" rel="noopener">Linda Thibault</a>, <a target="_blank" href="https://medium.com/@pdeschen" rel="noopener">Pascal Deschênes</a> and Karine Déry for their precious input.</p><p>The post <a href="https://www.nuecho.com/conversational-ux-for-chatbots-part-2/">Conversational UX for chatbots – part 2</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Conversational UX for chatbots</title>
		<link>https://www.nuecho.com/conversational-ux-for-chatbots/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=conversational-ux-for-chatbots</link>
		
		<dc:creator><![CDATA[Guillaume Voisine]]></dc:creator>
		<pubDate>Tue, 05 Feb 2019 15:21:08 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVA]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3128</guid>

					<description><![CDATA[<p>An overview of essential discourse patterns, part 1 Here at Nu Echo, we’ve been involved in the conversational space for quite some time now. One of the things we learned is that while creating a simple chatbot may take a few days (or even just a few minutes), creating one that is truly conversational requires a [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/conversational-ux-for-chatbots/">Conversational UX for chatbots</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2 >An overview of essential discourse patterns, part 1</h2>
<p>Here at Nu Echo, we’ve been involved in the conversational space for quite some time now. One of the things we learned is that while creating a simple chatbot may take a few days (or even just a few minutes), <a class="markup--anchor markup--p-anchor external" href="https://medium.com/cxinnovations/building-a-truly-conversational-chatbot-takes-more-than-30-minutes-e210412a49a1" target="_blank" rel="noopener noreferrer" data-href="https://medium.com/cxinnovations/building-a-truly-conversational-chatbot-takes-more-than-30-minutes-e210412a49a1">creating one that is <em class="markup--em markup--p-em">truly</em> conversational requires a lot more time</a> and expertise.</p>
<p id="6fec" class="graf graf--p graf-after--p">The purpose of this article is to present a list of the most important discourse patterns required to build what we consider a good conversational chatbot. This list is not exhaustive, but even then, it was quite long, so we decided to split it in multiple parts. This one will focus primarily on error handling and error messages.</p>
<p>Please note that we will only talk about task-oriented chatbots (also called <em class="markup--em markup--p-em">transactional chatbots</em>), i.e. bots that are designed to accomplish a task or a set of tasks, as opposed to chit-chat bots, whose primary objective is to maintain an organic conversation for as long as possible. That second type of chatbot presents <a class="markup--anchor markup--p-anchor external" href="https://medium.com/r/?url=https%3A%2F%2Fonlim.com%2Fen%2Fchit-chat-chatbots-and-how-to-make-them-better%2F" target="_blank" rel="nofollow noopener noreferrer" data-href="https://medium.com/r/?url=https%3A%2F%2Fonlim.com%2Fen%2Fchit-chat-chatbots-and-how-to-make-them-better%2F">its own set of very interesting challenges</a>, but it will not be the subject of this series of articles. We also won’t talk about implementation, as it can differ greatly depending on the technology used for development.</p>
<h4>Contextual and progressive error handling</h4>
<p>Have you ever tried to interact with a bot, only to hit a conversational wall?</p>
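<p>As a rough sketch of what contextual and progressive error handling can look like, here is a small, hypothetical Python example. The slot name and prompt texts are purely illustrative and not tied to any particular framework: the idea is that each failed attempt gets a more helpful reprompt, with a final escalation rather than a dead end.</p>

```python
# Illustrative sketch of progressive, contextual error handling.
# The slot name and prompts below are hypothetical examples.

ERROR_PROMPTS = {
    "amount": [
        # First failure: a brief, contextual reprompt.
        "Sorry, I didn't get that. How much would you like to transfer?",
        # Second failure: add concrete guidance with an example.
        "You can say an amount in dollars, for example 'fifty dollars'.",
        # Further failures: offer an escape hatch instead of looping forever.
        "I'm still having trouble. Let me connect you with an agent.",
    ],
}

def error_prompt(slot: str, retry_count: int) -> str:
    """Return an escalating reprompt for the slot being collected."""
    prompts = ERROR_PROMPTS[slot]
    # Clamp the index so repeated failures keep using the final escalation prompt.
    return prompts[min(retry_count, len(prompts) - 1)]
```

<p>Keying the prompts on both the slot being collected and the retry count is what makes the handling contextual <em>and</em> progressive: the bot's response reflects where the user is in the task and how many times they have already failed.</p>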
<p>&gt; <a class="external" href="https://medium.com/cxinnovations/conversational-ux-for-chatbots-ca8cc8e08ea" target="_blank" rel="noopener noreferrer">Read full version blog post</a></p><p>The post <a href="https://www.nuecho.com/conversational-ux-for-chatbots/">Conversational UX for chatbots</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
