<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Guillaume Voisine - AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</title>
	<atom:link href="https://www.nuecho.com/author/gvoisine/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.nuecho.com/author/gvoisine/</link>
	<description>Nu Echo</description>
	<lastBuildDate>Mon, 20 Sep 2021 15:26:07 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.nuecho.com/wp-content/uploads/2019/11/cropped-favicon-32x32.png</url>
	<title>Guillaume Voisine - AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</title>
	<link>https://www.nuecho.com/author/gvoisine/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Mandate: Possible &#8211; The conversational application</title>
		<link>https://www.nuecho.com/mandate-converastional-job-project/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=mandate-converastional-job-project</link>
		
		<dc:creator><![CDATA[Guillaume Voisine]]></dc:creator>
		<pubDate>Thu, 08 Oct 2020 14:00:21 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=7069</guid>

					<description><![CDATA[<p>My Zoom meeting is interrupted by the doorbell. It rings four times, following the usual pattern. I know what that means. Time for a new mandate. I apologize to my fellow agents, exit the session and rush to the door. As expected, no one is there, but I notice a small package on the ground. [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/mandate-converastional-job-project/">Mandate: Possible – The conversational application</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>My Zoom meeting is interrupted by the doorbell. It rings four times, following the usual pattern.</p>
<p>I know what that means. Time for a new mandate.</p>
<p>I apologize to my fellow agents, exit the session and rush to the door.</p>
<p>As expected, no one is there, but I notice a small package on the ground. I pick it up and go back inside. The envelope is bare. No stamp, no address, no name, nothing.</p>
<p>I return to the couch, only to find Rusty comfortably installed exactly where I was a moment ago.</p>
<p>“Meow”, he says. Translation: “Now it’s my place, human. Deal with it.”</p>
<p>Fine. I sit beside him and tear the package open, freeing an old-school, handheld tape recorder. A familiar voice fills the air of my living room when I press Play:</p>
<p>“Good afternoon, Mr. Project Manager.”</p>
<p>I chuckle. Still not on a first name basis, after all these years?</p>
<p>“Your mandate, should you choose to accept it, is to deliver a conversational application for The Company. They wish to improve their customer experience (CX) by automating their customer service. The application must be able to answer questions with personalized responses, but also execute actions based on customer requests. It should also provide a way for live agents to take over and quickly resolve problematic conversations.”</p>
<p>This seems interesting. Obviously, I will need more information before I can do any planning, but I can already start to build the perfect team to tackle this project.</p>
<p>“The initial requirements will be forwarded to you shortly. I’m certain you will represent the Secretary to the best of your abilities in the execution of this project.”</p>
<p>Rusty, lying on his side, stretches his legs and closes his eyes. “Don’t worry, my friend. Your mandate is to sleep.” I scratch him between the ears, and the cat purrs his approval.</p>
<p>“This recorder will self-destruct in ten seconds. Good luck.”</p>
<p>Oh no, not that again! I manage to reach my backyard, aim for the garbage bin and throw the recorder just as it starts to combust. That was close. Last time, I was not so lucky, and my house reeked of burned plastic for days.</p>
<p>Back to the couch. Rusty gives me an indignant look, his tail slowly whipping the air as I open my special binder containing the headshots of all the agents at my disposal.</p>
<p>I flip through the photos, looking for one in particular. I need an excellent communicator, like&#8230; There he is, Agent Business Analyst, to be the bridge between the client and the team. His acute sense of observation will serve him well, as he will have to understand the client’s business needs and rules. He will gather requirements from the client and help them define, specify and prioritize those requirements in order to determine which solution suits them best.</p>
<p>Throughout the project, he will leverage his comprehension of the technology and its potential applications to work with the client and the technical team to ensure that requirements are met and that the solution works as defined and as expected.</p>
<p>The very next photo pictures Agent Solution Architect. Yes, I will require her ability to have a global technical perspective on the project. Her deep knowledge of relevant and state-of-the-art technologies will help her advise the client, as well as the technical team, on the best technological choices to meet requirements and comply with any constraints the client may have. She will ensure that all the different pieces of the solution are considered and well-integrated with each other in a robust and effective whole.</p>
<p>Someone will have to define and design the conversation between the end-user and the system. That person must also be an excellent communicator, capable of interacting with all the stakeholders and members of the technical team. I turn to Rusty, who looks slightly less irritated. “What do you think?” I ask him. He yawns. Thanks for the assist, buddy. You’re perfectly right: Agent Conversational User Experience Designer (quite a mouthful. We call him CUX Designer, for short) is the perfect candidate for that. As the one responsible for the end-user experience and UI design, both for text and voice, his task will be to translate business and functional requirements into specific use cases and dialogue flows, as well as detailed functional design and messages. He will also have to validate these with the client and end-users. It will be his responsibility to ensure that the designs meet the client’s requirements, but also account for technical requirements or limitations, including automatic speech recognition (ASR) and natural language understanding (NLU).</p>
<p>Now, I’ll need to make certain that the application understands what the end-user says and correctly interprets what they mean. After all, for a conversational interface to be successful, it is essential that the user input is well understood and accurately interpreted, both globally and in context. I reach the end of the binder and start again from the beginning. Where is she? Ah, there! Agent NLU Scientist. She will work in close collaboration with Agent CUX Designer, as they represent both sides of the same coin: there has to be perfect cohesion between dialogue and NLU for the conversation to be successful. For voice applications, she will also be responsible for configuring and tuning the ASR.</p>
<p>Once the conversational agent is deployed and used by actual people, Agent NLU Scientist will also continue to play a critical role in tuning and improving its ability to understand what the user means.</p>
<p>A part of the team will need to work on materializing the requirements and designs into an actual solution that can be deployed and made accessible to end-users. This is clearly more than a one-person job. I resist the urge to ask Rusty for help again, as he’s drifting off to sleep. Okay then: I will put&#8230; Agent Software Developer, Agent Développeuse Logiciel, Agent Ohjelmistokehittäjä and Agent Softwareentwickler on that task. They are the ones who will implement the dialogue, create the access to the client’s backend systems (this is crucial if we want the application to provide personalized responses or interact with the system on behalf of the user), write unit tests and adapt existing tools like chat widgets for any particular needs of the project. Without developers, a conversational application is nothing more than a concept. Experienced developers can also provide useful feedback to designers and help create successful applications.</p>
<p>To make all the pieces work together, I also need, let’s see… Yes: Agent Integrator. Her broad range of skills, including software and general problem solving, will be instrumental to deliver a functioning solution adapted to the needs of the client. Her generalist approach will help her go through all the troubleshooting that inevitably occurs when integrating large and complex projects.</p>
<p>Nearly there. Beside me, Rusty is snoring, living his best cat dreams. I will require the valuable help of the QA Specialists Squad. They will play an essential role in making sure that the deployed application complies entirely with detailed specifications and meets all requirements. The Squad will interact with designers and developers, but also with the client, supporting them during user acceptance testing phases. They are responsible for test plans and for defining all the detailed test cases, whether manual or automated (which are essential in the context of continuous integration and continuous delivery (CI/CD)). The quality of the deployed application depends a lot on the dedication and professionalism of QA Specialists, as they are the ones who give the final go before deployment.</p>
<p>Yes, that should do it. Time to properly kick-start this project. But first, a little cup of tea would be great. As I get up, I notice a thick cloud of smoke rising out of my garbage bin in the backyard. I sigh under my breath, to avoid waking Rusty. The tea will have to wait. I must deal with that self-destructing (or rather all-destructing) recorder first.</p>
<p>Why can’t the Secretary just send emails, like normal people?</p>
<p><em>Thank you to my colleagues Linda Thibault and Karine Déry</em></p><p>The post <a href="https://www.nuecho.com/mandate-converastional-job-project/">Mandate: Possible – The conversational application</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Dialogflow distilled: On error handling</title>
		<link>https://www.nuecho.com/dialogflow-distilled-on-error-handling/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=dialogflow-distilled-on-error-handling</link>
		
		<dc:creator><![CDATA[Guillaume Voisine]]></dc:creator>
		<pubDate>Mon, 17 Jun 2019 11:26:48 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Dialogflow]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3924</guid>

					<description><![CDATA[<p>We said it before, and we’ll say it again: error handling is a crucial element of the conversational UX for chatbots.</p>
<p>The post <a href="https://www.nuecho.com/dialogflow-distilled-on-error-handling/">Dialogflow distilled: On error handling</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>We said it before, and we’ll say it again: error handling is a crucial element of the conversational UX for chatbots. In <a href="https://medium.com/cxinnovations/conversational-ux-for-chatbots-ca8cc8e08ea" target="_blank" rel="noopener">a previous post</a>, we identified two essential characteristics of good error handling. Firstly, it should be <em>contextual</em>, by avoiding generic messages (like “I’m sorry, I didn’t understand that”) and ensuring that error prompts are always relevant in the context of the dialogue. Secondly, it should be <em>progressive</em>, which consists of giving different error messages if the bot doesn’t recognize the user query multiple times in a row, each time escalating towards more exhaustive answers. Ideally, your error handling should be both contextual and progressive, but that is easier said than done, as these kinds of behavior demand design considerations that can have profound repercussions on how a bot is implemented.</p>
<h2>Error handling in vanilla Dialogflow: follow-up intents and fallbacks</h2>
<p>Dialogflow uses <a href="https://dialogflow.com/docs/intents/default-intents#default_fallback_intent" target="_blank" rel="noopener">fallback intents</a> when it fails to associate a user input with an intent. This, coupled with the <a href="https://dialogflow.com/docs/contexts/follow-up-intents" target="_blank" rel="noopener">follow-up intent</a> mechanism, allows for contextual error handling. For example, suppose a very simple agent containing these intents:</p>
<p><img fetchpriority="high" decoding="async" class="wp-image-3929 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/Default-fallback-intent.jpg" alt="" width="331" height="164" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/Default-fallback-intent.jpg 331w, https://www.nuecho.com/wp-content/uploads/2019/06/Default-fallback-intent-300x149.jpg 300w" sizes="(max-width: 331px) 100vw, 331px" /><br />
<span style="font-family: 'Courier New', Courier, monospace;">DoTheThing</span> is a normal intent that will be triggered by specific user inputs (in this case, a <a href="https://dialogflow.com/docs/intents/training-phrases" target="_blank" rel="noopener">training phrase</a> for this intent could be, for example, “I want to do the thing”). Intents can also be triggered by <a href="https://dialogflow.com/docs/events" target="_blank" rel="noopener">events</a>, but it’s not really relevant to the current discussion.</p>
<p>The <span style="font-family: 'Courier New', Courier, monospace;">DefaultFallbackIntent</span> is created automatically when you generate a new Dialogflow agent, and is triggered when Dialogflow can’t match the user input with anything else. You can think of the <span style="font-family: 'Courier New', Courier, monospace;">DefaultFallbackIntent</span> as a safety net that will catch everything, but only as a last resort. Custom fallback intents can also be created, as we’ll see.</p>
<p>In Dialogflow, it’s possible to declare intents as follow-ups to other intents. To continue with our same example, let’s suppose that the agent asks the user whether they’re sure they want to do the thing when <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing</span> is matched:</p>
<p><img decoding="async" class="wp-image-3941 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing.jpg" alt="" width="579" height="359" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing.jpg 579w, https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-480x298.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 579px, 100vw" /></p>
<p><span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-yes</span> and <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-no</span> are follow-up intents: they can only be matched if <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing</span> was previously matched. The way Dialogflow determines that is by keeping track of <a href="https://dialogflow.com/docs/contexts" target="_blank" rel="noopener">contexts</a>. Specifically, when <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing</span> is matched, a new context, <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-followup</span>, is generated. This context is then considered as one of the conditions to trigger <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-yes</span> or <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-no</span>, as we can see in this screenshot:</p>
<p><img decoding="async" class="wp-image-3943 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-yes.jpg" alt="" width="486" height="306" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-yes.jpg 486w, https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-yes-480x302.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 486px, 100vw" /></p>
<p>Finally, there is the follow-up fallback, <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-fallback</span>. This kind of fallback works the same way as the <span style="font-family: 'Courier New', Courier, monospace;">DefaultFallbackIntent</span>, but will only be operational when a specific context is active (<span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-followup</span>), like with any follow-up intent.</p>
<p>This allows a Dialogflow agent to have contextual error messages for virtually any situation: simply create follow-up fallbacks for every point in your dialogue where the user could say something that is not supported by your agent (pro-tip: most of the time, it’s all the time). You can even implement <em>progressive</em> error handling with this, since follow-up fallback intents can be daisy-chained to ensure proper escalation of error prompts:</p>
<p><img decoding="async" class="wp-image-3945 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-fallback.jpg" alt="" width="575" height="429" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-fallback.jpg 575w, https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-fallback-480x358.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 575px, 100vw" /></p>
<p>Here, if a user triggers <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing</span> and then says something that is not recognized, <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-fallback</span> will handle the situation. If the user then says something that is contextually correct (something that would match <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-yes</span> or <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-no</span>), the dialogue will continue normally. If they say another unsupported input, the agent will use the next follow-up fallback, <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-fallback-fallback</span>.</p>
<h2>The problem with slot filling error handling</h2>
<p>So all is good, right? After all, we just demonstrated that you can do anything you want with follow-ups and fallbacks. Well, <em>almost</em> anything. Most notably, this approach will not work with <a href="https://dialogflow.com/docs/concepts/slot-filling" target="_blank" rel="noopener">slot filling</a>. But before we tell you why and what can be done about it, we need to take a closer look at how slot filling is handled in Dialogflow.</p>
<p>First of all, what is slot filling? It is a simple conversational pattern where a bot is given a list of information to obtain (or <em>slots</em> to <em>fill</em>). Usually, each of these slots will correspond to an entity that the chatbot will try to extract from the user’s input. When a slot is considered filled, the bot will simply move to the next slot, until they’re all filled.</p>
<p>In Dialogflow, slot filling is tied to a given intent. In other words, a user will first need to trigger a specific intent for the agent to start the slot filling. One important detail to note is that the agent will <em>stay</em> on the same intent for the duration of the slot filling.</p>
<p>Let’s see how it works with this image taken from Dialogflow’s documentation:</p>
<p><img decoding="async" class="wp-image-3947 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/Action-Parameters.jpg" alt="" width="713" height="440" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/Action-Parameters.jpg 713w, https://www.nuecho.com/wp-content/uploads/2019/06/Action-Parameters-480x296.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 713px, 100vw" /></p>
<p>The agent is given a list of three slots to fill: <span style="font-family: 'Courier New', Courier, monospace;">number-int</span>, <span style="font-family: 'Courier New', Courier, monospace;">color</span> and <span style="font-family: 'Courier New', Courier, monospace;">size</span>, each of a different entity type. When the user triggers the intent that contains the slot filling, the agent will ask the user for each slot, in order. By default, the question is in the form “What is SLOT_NAME?”, but customized prompts can be defined for each slot.</p>
<p>An error occurs in slot filling when the agent is unable to extract the expected entity from the user input. In this situation, by default, the Dialogflow agent will simply repeat the question until it receives a valid answer. Forever.</p>
<p>Error handling in slot filling is problematic because we can’t use the strategies that work in other circumstances (follow-up intents and fallbacks): slot filling always happens <em>within</em> a single intent, and these strategies are built on top of the intent detection mechanism. The result is very basic error handling for slot filling: while the dialogue can at least continue, there is no error message guiding the user, which makes for poor conversational UX.</p>
<p>Let’s try to fix that.</p>
<h2>Detecting slot filling error context</h2>
<p>(Please note that for the purpose of this article, we’ll suppose that the fulfillment is deployed in a serverless execution environment such as Google Cloud Functions, which must remain stateless. Other approaches are of course possible.)</p>
<p>The first step to implement error handling in slot filling is to properly configure your agent to use your fulfillment webhook, and to activate calls to your fulfillment for the slot filling in your intent:</p>
<p><img decoding="async" class="wp-image-3949 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/Fulfillment.jpg" alt="" width="552" height="249" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/Fulfillment.jpg 552w, https://www.nuecho.com/wp-content/uploads/2019/06/Fulfillment-480x217.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 552px, 100vw" /></p>
<p>That was easy enough. The next step is to make your fulfillment code detect occurrences of errors in the slot filling. The problem is that Dialogflow doesn’t communicate that information directly. The only information that is clearly exposed in the fulfillment is the detected intent (which doesn’t change for the duration of the slot filling). But you can also extract from the contexts the current slot that the dialogue is trying to fill (more on that later). Even though there is no explicit mention of any failed entity extraction, we already have everything we need to implement slot filling error detection.</p>
<p>The idea is to use contexts to leave a trace of every time slot filling is attempted for any given slot. Here are the main steps for slot filling error detection:</p>
<ol>
<li>Detect if you are in slot filling</li>
<li>If so, construct an error ID from the intent’s name and the current slot’s name</li>
<li>Check if a context with that name already exists:
<ol type="a">
<li>If it doesn’t, create it and continue as normal</li>
<li>If it does, you just detected an error</li>
</ol>
</li>
</ol>
<p>How does this work? Well, by making sure to create a custom context every time your agent tries to fill a slot, you leave evidence of that particular attempt. If the slot is properly filled, the agent will move to the next slot, and your fulfillment will generate another error ID (different from the previous one, since the slot is different). But if a call to your fulfillment is made with your custom context already created, it means that the agent is trying to fill the same slot again, which means that the user didn’t respond correctly to the slot query. In other words, you just detected an error in slot filling!</p>
<p>Let’s see how each step can be implemented:</p>
<p>One easy way to validate that you are indeed in slot filling is to use the <a href="https://dialogflow.com/docs/reference/fulfillment-library/webhook-client" target="_blank" rel="noopener">WebhookClient</a> and look in <span style="font-family: 'Courier New', Courier, monospace;">client.contexts</span> for a context (automatically generated by Dialogflow) whose name starts with <span style="font-family: 'Courier New', Courier, monospace;">`${INTENT}_dialog_params_`</span>, where <span style="font-family: 'Courier New', Courier, monospace;">INTENT</span> is the name of the current intent, in lowercase. If the context doesn’t exist, you are not in slot filling. If it does, you are, and you can extract the name of the current slot from the end of that context’s name (immediately after “dialog_params_”).</p>
<p>And now, to create the error ID: since we can find the name of the intent and the name of the current slot, we can use this information to forge a unique error ID for each slot in the dialogue. Doesn’t have to be fancy, something like <span style="font-family: 'Courier New', Courier, monospace;">`error_${INTENT}_${SLOT}`</span> will do the trick.</p>
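<p>Putting these first two steps together, the detection logic can be sketched as follows. This is only an illustration: we assume the contexts arrive as plain objects with a <span style="font-family: 'Courier New', Courier, monospace;">name</span> field (as exposed by the WebhookClient’s <span style="font-family: 'Courier New', Courier, monospace;">client.contexts</span>), and the helper names are ours, not part of the fulfillment library:</p>

```javascript
// Step 1: detect slot filling by looking for the auto-generated
// "<intent>_dialog_params_<slot>" context. Returns the current slot's
// name, or null if the agent is not in slot filling.
function findCurrentSlot(contexts, intentName) {
  const prefix = `${intentName.toLowerCase()}_dialog_params_`;
  const ctx = contexts.find(c => c.name.startsWith(prefix));
  return ctx ? ctx.name.slice(prefix.length) : null;
}

// Step 2: forge a unique error ID for this intent/slot pair.
// (We lowercase the intent name for consistency; the exact format
// doesn't matter as long as it's unique per slot.)
function errorId(intentName, slot) {
  return `error_${intentName.toLowerCase()}_${slot}`;
}
```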
<p>To check if our custom context already exists, we can use the following function from the WebhookClient:</p>
<p style="padding-left: 40px;"><span style="font-family: 'Courier New', Courier, monospace;">client.getContext(errorID);</span></p>
<p>That’s pretty much self-explanatory. As for context creation:</p>
<p style="padding-left: 40px; font-family: 'Courier New', Courier, monospace;">client.setContext({<br />
name: `error_${INTENT}_${SLOT}`,<br />
lifespan: 1<br />
});</p>
<p>Here, we set the lifespan of the custom error context to 1, to make it disappear as soon as possible when it’s not needed anymore.</p>
<p>And that’s it for slot filling context error detection. From there, you already have most of the things you need to implement actual error handling: you just have to override the original bot response with a contextually appropriate message, which can be done with this simple command:</p>
<p style="padding-left: 40px;"><span style="font-family: 'Courier New', Courier, monospace;">client.add(message);</span></p>
<p>Of course, you now need a way to retrieve or generate these contextually correct messages from within the fulfillment, which means doing it programmatically. A good way to implement this while keeping your code clean and language agnostic is to use the error ID you created as a key into a localization library that consolidates all your error messages. From there, it’s a simple matter of calling the localization library with the error ID to resolve it to a proper error message.</p>
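<p>For illustration, here is how the whole detect-and-override loop might look, using the <span style="font-family: 'Courier New', Courier, monospace;">getContext</span>, <span style="font-family: 'Courier New', Courier, monospace;">setContext</span> and <span style="font-family: 'Courier New', Courier, monospace;">add</span> methods shown above. The <span style="font-family: 'Courier New', Courier, monospace;">messages</span> table stands in for a real localization library, and all names are illustrative, not prescriptive:</p>

```javascript
// Stand-in for a localization library: error ID -> error message.
const messages = {
  error_dothething_color: 'Sorry, I need a color, like red or blue.',
};

// Called from the fulfillment when we know we are in slot filling for
// `slot` of intent `intentName` (detection described in the article).
function handleSlotFilling(client, intentName, slot) {
  const id = `error_${intentName.toLowerCase()}_${slot}`;
  if (client.getContext(id)) {
    // The context already exists: the agent is asking for the same slot
    // again, so we just detected an error. Override the default prompt.
    client.add(messages[id] || 'Sorry, I did not get that.');
  }
  // (Re)create the context so the next attempt can be detected too.
  client.setContext({ name: id, lifespan: 1 });
}
```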
<h2>Going progressive</h2>
<p>Now that we have implemented contextual error handling for the slot filling, let’s improve it by making it progressive. An easy way to do that is to add a <span style="font-family: 'Courier New', Courier, monospace;">count</span> parameter to your custom context:</p>
<p style="padding-left: 40px; font-family: 'Courier New', Courier, monospace;">client.setContext({<br />
name: `error_${INTENT}_${SLOT}`,<br />
lifespan: 1,<br />
parameters: { count: 1 }<br />
});</p>
<p>From there, if an error occurs, you can append the <span style="font-family: 'Courier New', Courier, monospace;">count</span> to the error ID (<span style="font-family: 'Courier New', Courier, monospace;">`${errorID}_${COUNT}`</span>) and use that new ID template to identify your error messages in your localization library (or whatever similar strategy you use), giving you any number of distinct error messages for any given context. Just don’t forget to increment the count value in the context each time you detect an error, by calling <span style="font-family: 'Courier New', Courier, monospace;">setContext</span> again. You may also want to put a cap on that number, to avoid a user reaching an error message that was not implemented.</p>
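<p>As a sketch (again assuming the WebhookClient-style <span style="font-family: 'Courier New', Courier, monospace;">getContext</span> and <span style="font-family: 'Courier New', Courier, monospace;">setContext</span> methods), the progressive error ID could be built like this; <span style="font-family: 'Courier New', Courier, monospace;">MAX_COUNT</span> is our own illustrative cap, not a Dialogflow setting:</p>

```javascript
// Cap the escalation so we never resolve to a message that was not written.
const MAX_COUNT = 3;

// Returns the localization key for the current error, incrementing the
// count stored in the error context on each detected error.
function progressiveErrorId(client, intentName, slot) {
  const id = `error_${intentName.toLowerCase()}_${slot}`;
  const previous = client.getContext(id);
  const count = previous
    ? Math.min(previous.parameters.count + 1, MAX_COUNT)
    : 1;
  client.setContext({ name: id, lifespan: 1, parameters: { count } });
  return `${id}_${count}`;
}
```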
<h2>Next steps</h2>
<p>This concludes our short discussion on slot filling error handling in Dialogflow. Of course, the proposed implementation is bare-bones and doesn’t support useful functionalities like constraints, disambiguation (“Did you mean A or B?”) or certain particular situations (things could get a little more complex if you decided to activate the <a href="https://cloud.google.com/dialogflow-enterprise/docs/knowledge-connectors" target="_blank" rel="noopener">Knowledge Base feature</a>, for example). These issues will most probably be covered in a future blog post about Dialogflow. Thanks for reading!</p><p>The post <a href="https://www.nuecho.com/dialogflow-distilled-on-error-handling/">Dialogflow distilled: On error handling</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Conversational UX for chatbots &#8211; part 2</title>
		<link>https://www.nuecho.com/conversational-ux-for-chatbots-part-2/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=conversational-ux-for-chatbots-part-2</link>
		
		<dc:creator><![CDATA[Guillaume Voisine]]></dc:creator>
		<pubDate>Fri, 29 Mar 2019 17:35:50 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVA]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3187</guid>

					<description><![CDATA[<p>An overview of essential discourse patterns, part 2 Counter-proposals A conversation with a chatbot doesn’t have to follow a simple question-answer structure. For example, bots can offer suggestions to the user: this paves the way to even more complex interactions. This means more fluid and natural conversations, but also that more efforts need to be [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/conversational-ux-for-chatbots-part-2/">Conversational UX for chatbots – part 2</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>An overview of essential discourse patterns, part 2</h2>
<h3>Counter-proposals</h3>
<p>A conversation with a chatbot doesn’t have to follow a simple question-answer structure. For example, bots can offer suggestions to the user, which paves the way for even more complex interactions. This means more fluid and natural conversations, but also that more effort needs to be put into the dialogue’s design.</p>
<p>If your bot suggests actions or responses, one thing to consider is that the user may want to modify them. This is what we call a counter-proposal. It follows this structure:</p>
<ul>
<li>Chatbot makes a suggestion</li>
<li>User refuses the suggestion and modifies it</li>
<li>Chatbot acknowledges the correction.</li>
</ul>
<blockquote><p><em>To feel organic, a good counter-proposal should require exactly one interaction from the user, immediately after the bot’s proposal.</em></p></blockquote>
<p>The simple fact that the user provides a corrected value implies that the original proposal is refused. Of course, you could always achieve the same effect with more steps:</p>
<ol>
<li>Chatbot makes a suggestion</li>
<li>User refuses it</li>
<li>Chatbot asks what user wants</li>
<li>User says it</li>
<li>Chatbot acknowledges.</li>
</ol>
<p>But it feels contrived, and only supporting that structure could lead to a scenario where the user would have to repeat the same information twice. This is a cardinal sin in chatbot-land:</p>
<p><img decoding="async" class="wp-image-1820 alignnone size-medium" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land-300x284.png" alt="" width="300" height="284" srcset="https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land-300x284.png 300w, https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land.png 382w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p>Feels more artificial than intelligent, right? This is why, if you want to offer a more natural conversational experience, bot suggestions should not be implemented without support for counter-proposals, like so:</p>
<p><img decoding="async" class="wp-image-1822 alignnone size-medium" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land-2-300x210.png" alt="" width="300" height="210" srcset="https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land-2-300x210.png 300w, https://www.nuecho.com/wp-content/uploads/2019/03/chatbot-land-2.png 388w" sizes="(max-width: 300px) 100vw, 300px" /></p>
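<p>To make the pattern concrete, here is a minimal, hypothetical Python sketch of a one-turn counter-proposal: a reply that carries a new value is treated as refusal and correction at once. The <code>extract_time</code> helper and the canned replies are our own illustration, not a Dialogflow API.</p>

```python
import re

def handle_reply(proposal: str, user_reply: str, extract_time) -> str:
    """Treat a reply that carries a new value as refusal + correction in one turn."""
    new_value = extract_time(user_reply)
    if new_value is not None and new_value != proposal:
        # The corrected value implies the proposal was refused: just acknowledge it.
        return f"All right, {new_value} it is."
    if any(word in user_reply.lower() for word in ("yes", "ok", "sure")):
        return f"Great, {proposal} confirmed."
    # Plain refusal with no correction: only now do we need an extra question.
    return "No problem. What time would work better for you?"

def naive_time_extractor(text: str):
    # Toy extractor for this sketch: picks out strings like "3pm".
    match = re.search(r"\b\d{1,2}\s?(?:am|pm)\b", text.lower())
    return match.group(0).replace(" ", "") if match else None

print(handle_reply("2pm", "no, make it 3pm", naive_time_extractor))
# → All right, 3pm it is.
```

<p>The key design choice is that the correction path short-circuits the confirmation question entirely, so the user never has to repeat the same information twice.</p>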
<h3>Contextual constraints for entities</h3>
<p>Out of the box, most conversational frameworks (like <a target="_blank" href="https://dialogflow.com/" rel="noopener">Google Dialogflow</a> or <a target="_blank" href="https://www.ibm.com/watson/" rel="noopener">IBM Watson</a>) offer some ways to control what information can be extracted as entities in the dialogue. Typically, there are two options: use predefined system entities or create custom entities, most of the time by writing a list of possible values, although some platforms also accept regular expressions to delimit what can be extracted as a given entity. Here is an example of entity declaration in Dialogflow:</p>
<p><img decoding="async" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/03/Dialogflow.png" /></p>
<p>This is all very nice and useful, but it can be quite lacking in terms of flexibility. While the type of an entity is pertinent information, in a lot of situations it’s not in itself enough to control the flow of the dialogue. Suppose you want your bot to ask for a date to, say, book a flight. Checking that the user’s answer is actually a date is one thing; verifying that the date is valid in the context of the conversation is another. Here, one constraint would be that the date needs to be in the future. But in another conversation, it could be perfectly valid (or even expected) for the user to give a date in the past (such as when asked for their birthdate). Constraints can also be defined by a dependency on another entity. To keep the flight booking example, if the bot asks for a return date, it stands to reason that it must be later than the departure date.</p>
<blockquote><p><em>Constraint checking on entities is a basic conversational pattern that can be very useful to control the flow of a dialogue.</em></p></blockquote>
<p>Of course, like any conversational concept, constraint checking is not a silver bullet; it is but a piece of a larger puzzle.</p>
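<p>The flight-booking constraints above can be sketched as a thin validation layer sitting between entity extraction and the dialogue flow. This is a hypothetical illustration (the function names are ours, not part of any framework): the same entity type, a date, passes or fails depending on the dialogue context.</p>

```python
from datetime import date
from typing import Optional

def validate_departure(departure: date, today: date) -> Optional[str]:
    """Contextual constraint: a flight departure must lie in the future."""
    if departure <= today:
        return "The departure date needs to be in the future."
    return None  # None means the constraint is satisfied

def validate_return(return_date: date, departure: date) -> Optional[str]:
    """Dependency constraint: the return date must follow the departure date."""
    if return_date <= departure:
        return "The return date must be after the departure date."
    return None

# A valid departure produces no error; an inverted return date produces a
# reprompt message the bot can play back to the user.
print(validate_departure(date(2019, 4, 10), today=date(2019, 3, 29)))  # → None
print(validate_return(date(2019, 4, 5), departure=date(2019, 4, 10)))
```

<p>Returning the reprompt text from the validator keeps the error message next to the constraint it explains, which makes it easy to reuse the same entity type under different contextual rules.</p>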
<h3>Digressions</h3>
<p>According to the Oxford English Dictionary, a digression is “a temporary departure from the main subject in speech or writing.” This is something we humans do every day: a quick discursive detour to ask for more information or inject a little by-the-way into an ongoing conversation. Our brain is naturally wired to easily handle this kind of context switching. Most dialogue models for chatbots, sadly, are not.</p>
<p>It’s a shame, really, because digressions should not be considered an optional feature, but a cornerstone of dialogue design.</p>
<blockquote><p><em>The ability for a bot to handle multiple concurrent dialogue contexts is fundamental to create a believable conversational virtual agent.</em></p></blockquote>
<p>Without this, chatbots feel very limited, constrained to a specific discursive path from which the user is not really permitted to stray. Of course, digression support is not magic either, and chatbots, especially task-oriented ones, will probably always be restrained, at least to some extent, to a relatively small conversational perimeter. But supporting digressions is mostly about empowering users by giving them more control over the flow and the shape of the dialogue.</p>
<p>There are a lot of interesting use cases for digression. One of them is the informational query, where the user needs the bot to give them some crucial details before they can make a decision. This can be coupled with bot proposals (and, possibly, counter-proposals):</p>
<p><img decoding="async" class="wp-image-1826 alignnone size-medium" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/03/bots-proposal-293x300.png" alt="" width="293" height="300" srcset="https://www.nuecho.com/wp-content/uploads/2019/03/bots-proposal-293x300.png 293w, https://www.nuecho.com/wp-content/uploads/2019/03/bots-proposal.png 374w" sizes="(max-width: 293px) 100vw, 293px" /></p>
<p>When the user asks how much money he has in his account, he doesn’t really want to change the subject: he just needs more data (in this case, how much money is available in his savings account) in order to make an informed decision. This can be a very useful tool to improve the user-friendliness of a chatbot. Also, we can see from this example that dialogue patterns are not components that are to be integrated in isolation; they can mesh together to provide a more pleasant flow to the conversation.</p>
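<p>One simple way to model multiple concurrent dialogue contexts is a stack: a digression is answered on top of the interrupted task, which then resumes exactly where it left off. The sketch below is a deliberately minimal, hypothetical illustration of that idea, not the mechanism of any particular platform.</p>

```python
class Dialogue:
    """A minimal stack of dialogue contexts, enough to model a digression."""

    def __init__(self):
        self.stack = []  # pending tasks, most recent on top

    def start(self, task: str) -> str:
        self.stack.append(task)
        return f"Starting: {task}"

    def digress(self, query: str) -> str:
        # Answer the side query without discarding the interrupted task.
        return f"(answering '{query}' while '{self.stack[-1]}' stays on hold)"

    def resume(self) -> str:
        # The interrupted task is still on top of the stack, ready to continue.
        return f"Back to: {self.stack[-1]}"

d = Dialogue()
print(d.start("transfer $500 to savings"))
print(d.digress("how much is in my savings account?"))
print(d.resume())
```

<p>The essential point is that the informational query never pops the pending task: the user gets their answer and the original transaction continues without being restarted.</p>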
<p>Hopefully you have gained some knowledge about conversational design and why it matters by reading this article. Thanks to my colleagues <a target="_blank" href="https://medium.com/@linda.thibault" rel="noopener">Linda Thibault</a>, <a target="_blank" href="https://medium.com/@pdeschen" rel="noopener">Pascal Deschênes</a> and Karine Déry for their precious input.</p><p>The post <a href="https://www.nuecho.com/conversational-ux-for-chatbots-part-2/">Conversational UX for chatbots – part 2</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Conversational UX for chatbots</title>
		<link>https://www.nuecho.com/conversational-ux-for-chatbots/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=conversational-ux-for-chatbots</link>
		
		<dc:creator><![CDATA[Guillaume Voisine]]></dc:creator>
		<pubDate>Tue, 05 Feb 2019 15:21:08 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVA]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3128</guid>

					<description><![CDATA[<p>An overview of essential discourse patterns, part 1 Here at Nu Echo, we&#8217;ve been involved in the conversational space for quite some time now. One of the things we learned is that while creating a simple chatbot may take a few days (or even just a few minutes), creating one that is truly conversational requires a [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/conversational-ux-for-chatbots/">Conversational UX for chatbots</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2 >An overview of essential discourse patterns, part 1</h2>
<p>Here at Nu Echo, we’ve been involved in the conversational space for quite some time now. One of the things we learned is that while creating a simple chatbot may take a few days (or even just a few minutes), <a class="markup--anchor markup--p-anchor external" href="https://medium.com/cxinnovations/building-a-truly-conversational-chatbot-takes-more-than-30-minutes-e210412a49a1" target="_blank" rel="noopener noreferrer" data-href="https://medium.com/cxinnovations/building-a-truly-conversational-chatbot-takes-more-than-30-minutes-e210412a49a1">creating one that is <em class="markup--em markup--p-em">truly</em> conversational requires a lot more time</a> and expertise.</p>
<p id="6fec" class="graf graf--p graf-after--p">The purpose of this article is to present a list of the most important discourse patterns required to build what we consider a good conversational chatbot. This list is not exhaustive, but even then, it was quite long, so we decided to split it into multiple parts. This one will focus primarily on error handling and error messages.</p>
<p>Please note that we will only talk about task-oriented chatbots (also called <em class="markup--em markup--p-em">transactional chatbots</em>), i.e. bots that are designed to accomplish a task or a set of tasks, as opposed to chit-chat bots, whose primary objective is to maintain an organic conversation as long as possible. That second type of chatbot presents <a class="markup--anchor markup--p-anchor external" href="https://medium.com/r/?url=https%3A%2F%2Fonlim.com%2Fen%2Fchit-chat-chatbots-and-how-to-make-them-better%2F" target="_blank" rel="nofollow noopener noreferrer" data-href="https://medium.com/r/?url=https%3A%2F%2Fonlim.com%2Fen%2Fchit-chat-chatbots-and-how-to-make-them-better%2F">its own set of very interesting challenges</a>, but it will not be the subject of this series of articles. We also won’t talk about implementation, as it can greatly differ depending on the technology that is used for development.</p>
<h4>Contextual and progressive error handling</h4>
<p>Have you ever tried to interact with a bot, only to hit a conversational wall?</p>
<p>&gt; <a class="external" href="https://medium.com/cxinnovations/conversational-ux-for-chatbots-ca8cc8e08ea" target="_blank" rel="noopener noreferrer">Read full version blog post </a></p><p>The post <a href="https://www.nuecho.com/conversational-ux-for-chatbots/">Conversational UX for chatbots</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p><p>The post <a href="https://www.nuecho.com/conversational-ux-for-chatbots/">Conversational UX for chatbots</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
