<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Dialogflow - AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</title>
	<atom:link href="https://www.nuecho.com/category/dialogflow/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.nuecho.com/category/dialogflow/</link>
	<description>Nu Echo</description>
	<lastBuildDate>Thu, 15 Sep 2022 14:52:33 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.nuecho.com/wp-content/uploads/2019/11/cropped-favicon-32x32.png</url>
	<title>Dialogflow - AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</title>
	<link>https://www.nuecho.com/category/dialogflow/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Voice agents VS. Chatbots: Where does the difference lie?</title>
		<link>https://www.nuecho.com/voice-agents-vs-chatbots-where-does-the-difference-lie/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=voice-agents-vs-chatbots-where-does-the-difference-lie</link>
		
		<dc:creator><![CDATA[Karine Dery]]></dc:creator>
		<pubDate>Wed, 14 Sep 2022 16:27:02 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Dialogflow]]></category>
		<category><![CDATA[Chatbot]]></category>
		<category><![CDATA[contact center automation]]></category>
		<category><![CDATA[contact center virtual agent]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[Conversational design]]></category>
		<category><![CDATA[DialogFlow]]></category>
		<category><![CDATA[NLP model]]></category>
		<category><![CDATA[NLU model]]></category>
		<category><![CDATA[use cases virtual agents]]></category>
		<category><![CDATA[virtual agent]]></category>
		<category><![CDATA[Voicebot]]></category>
		<category><![CDATA[voicebot persona]]></category>
		<guid isPermaLink="false">https://www.nuecho.com/?p=9487</guid>

					<description><![CDATA[<p>In our field of work, we often hear “Once we’re done with the voice assistant, we’ll just use the dialog to add a chatbot on our website!” or “now that our chatbot is done, it will be a piece of cake to make a voice bot”. Seemingly, it looks like we would only need to [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/voice-agents-vs-chatbots-where-does-the-difference-lie/">Voice agents VS. Chatbots: Where does the difference lie?</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">In our field of work, we often hear “Once we’re done with the voice assistant, we’ll just use the dialog to add a chatbot on our website!” or “now that our chatbot is done, it will be a piece of cake to make a voice bot”. Seemingly, it looks like we would only need to add or remove the speech processing (</span><i><span style="font-weight: 400;">speech-to-text</span></i><span style="font-weight: 400;">, STT) and speech synthesis (</span><i><span style="font-weight: 400;">text-to-speech</span></i><span style="font-weight: 400;">, TTS) layers to magically transform a chatbot into a voicebot and vice versa (by the wave of a magic wand).</span></p>
<p><span style="font-weight: 400;">Based on our experience, we would also describe such a simple transformation as magic!</span></p>
<p><span style="font-weight: 400;">In this blog post, I will illustrate why with a few counterexamples.</span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">Generating the output</span></h2>
<h3><span style="font-weight: 400;">Presenting complex information</span></h3>
<p><span style="font-weight: 400;">Within a chatbot, text information can be enriched with images, hyperlinks, slideshows, etc. Some use cases such as navigation assistance or purchase recommendations would seem impossible to implement without those tools.</span></p>
<p><span style="font-weight: 400;">In other cases, several voice interactions would be required to reach the same result as a single visual output. For example, here is my best shot at transforming the output of an appointment scheduling chatbot for a voicebot:</span></p>
<p><span style="font-weight: 400;"><img decoding="async" class="aligncenter wp-image-9488 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/rdv-c-en.png" alt="" width="358" height="522" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/rdv-c-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/rdv-c-en-206x300.png 206w" sizes="(max-width: 358px) 100vw, 358px" /></span></p>
<p><img decoding="async" class="aligncenter wp-image-9490 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/rdv-v-en.png" alt="" width="358" height="393" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/rdv-v-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/rdv-v-en-273x300.png 273w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Trail of previous interactions</span></h3>
<p><span style="font-weight: 400;">What does a chatbot do if the user is not paying attention, has poor memory, or has forgotten to put on their glasses? Nothing! The output remains there for the user to re-read as they see fit. Exchanges that are absolutely necessary in a spoken interaction thus become completely useless in a written conversation:</span></p>
<p><img decoding="async" class="aligncenter wp-image-9492 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/repeter-en.png" alt="" width="358" height="397" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/repeter-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/repeter-en-271x300.png 271w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Persona and voice features</span></h3>
<p><span style="font-weight: 400;">The persona (demographics, language level, personality) of the virtual agent, as well as its consistency, are important in both modes. While in text mode you have to think about the visual output of the chatbot, in voice mode you have to look for a voice that represents the desired characteristics while sounding natural, and this can limit our options. For example, trying to create an informal voice agent can be nearly impossible, especially when using TTS instead of a recorded voice (which also has its limitations).</span></p>
<audio class="wp-audio-shortcode" id="audio-9487-1" preload="none" style="width: 100%;" controls="controls"><source type="audio/wav" src="https://www.nuecho.com/wp-content/uploads/2022/09/voicebot_cool-en.wav?_=1" /><a href="https://www.nuecho.com/wp-content/uploads/2022/09/voicebot_cool-en.wav">https://www.nuecho.com/wp-content/uploads/2022/09/voicebot_cool-en.wav</a></audio>
<p>&nbsp;</p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Support of multiple channels</span></h3>
<p><span style="font-weight: 400;">Finally, even if our use cases are channel-agnostic, our persona very simple and our agent very talkative, it is clear that we must at least be able to play different messages depending on the channel, so that SSML can be included in the audio messages. Unfortunately, some dialog engines hardly support multiple channels, and this can greatly increase the challenge of implementing a common agent for both voice and text.</span></p>
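<p>As a rough sketch of this idea (engine-agnostic; the message id, channel names and SSML markup below are illustrative assumptions, not any engine’s API), per-channel variants of the same logical message could be kept side by side:</p>

```javascript
// Hypothetical message catalog: the same logical message has a plain-text
// variant for the chat channel and an SSML variant for the voice channel.
const messages = {
  confirmDate: {
    text: "Your appointment is on 2022-09-14.",
    // SSML so the TTS engine reads the date as a date, not digit by digit.
    voice:
      "<speak>Your appointment is on " +
      '<say-as interpret-as="date" format="ymd">2022-09-14</say-as>.</speak>',
  },
};

// Pick the channel-appropriate rendering, falling back to plain text.
function render(messageId, channel) {
  const variants = messages[messageId];
  if (!variants) throw new Error(`unknown message: ${messageId}`);
  return variants[channel] || variants.text;
}
```

<p>A dialog engine with proper multi-channel support does this selection for you; when it doesn’t, this kind of lookup ends up duplicated in the fulfillment code.</p>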
<p><img decoding="async" class="aligncenter wp-image-9496 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/ssml-en.png" alt="" width="358" height="364" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/ssml-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/ssml-en-295x300.png 295w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">Input interpretation</span></h2>
<p><span style="font-weight: 400;">“What about the other way around? The user won’t send images or carousels to the chatbot. Surely, interpreting the input can’t be that different.” I will answer with a dramatic example. Let’s look at Bob, who is trying to express what he needs to a voice agent:</span></p>
<p><img decoding="async" class="aligncenter wp-image-9498 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/bob-en.png" alt="" width="677" height="764" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/bob-en.png 677w, https://www.nuecho.com/wp-content/uploads/2022/09/bob-en-480x542.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 677px, 100vw" /></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">Of course, Bob and his legendary bad luck are not real, but the cases I have presented are taken from real-life examples. Even though some STT models can now ignore “mhms”, noises and background voices, the transcription will still have its share of errors.</span></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Uncertainty</span></h3>
<p><span style="font-weight: 400;">There are ways to reduce these errors or their impacts, whether it’s through the configuration of the engine, systematic changes of the transcription, or the adaptation of the NLP model to the sentences received. There remains, however, an additional uncertainty related to the STT which must be taken into account in the development of a voice application.</span></p>
<p>&nbsp;</p>
<h4><span style="font-weight: 400;">Strategies for dealing with uncertainty</span></h4>
<p><span style="font-weight: 400;">To increase our confidence in the interpretation of the input, we will use more strategies for dealing with uncertainty in the dialogue of a voice agent than in the dialogue of a textual agent.</span></p>
<p><span style="font-weight: 400;">For example, we can think of:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Adding a step to explicitly or implicitly confirm an intent or an entity</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Adding a step to disambiguate the input when intents are too similar</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Enabling changes or corrections</span></li>
</ul>
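<p>The first two strategies can be sketched as a simple decision on the NLU result; the confidence thresholds below are illustrative assumptions, not values recommended by any particular engine:</p>

```javascript
// Given a hypothetical NLU result (top intent, its confidence, and an
// optional runner-up), decide how the voice agent should proceed.
function nextStep({ intent, confidence, runnerUp }) {
  // Too unsure of everything: ask the user to rephrase.
  if (confidence < 0.4) return { action: "reprompt" };
  // Two intents scored almost the same: disambiguate explicitly.
  if (runnerUp && confidence - runnerUp.confidence < 0.1) {
    return { action: "disambiguate", between: [intent, runnerUp.intent] };
  }
  // Moderately confident: confirm before acting.
  if (confidence < 0.7) return { action: "confirm", intent };
  // Confident enough to proceed directly.
  return { action: "proceed", intent };
}
```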
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter wp-image-9501 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en.png" alt="" width="358" height="393" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en-273x300.png 273w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h4><span style="font-weight: 400;">Choosing use cases</span></h4>
<p><span style="font-weight: 400;">Addresses, emails or people’s names are difficult pieces of information to transcribe correctly for many reasons, but they present lesser challenges in writing. If some of these pieces of information are critical for a use case, it could be very complex, risky, or detrimental to the user experience to implement it through a voice agent.</span></p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter wp-image-9503 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/courriel-en.png" alt="" width="358" height="358" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/courriel-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/courriel-en-300x300.png 300w, https://www.nuecho.com/wp-content/uploads/2022/09/courriel-en-150x150.png 150w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">Real-time management</span></h2>
<p><span style="font-weight: 400;">The last big difference between voice and text conversations is time management. A text conversation is asynchronous: the input is received in one block, and the response that follows is sent in one block. The audio, on the other hand, is transmitted continuously, so the time must be managed accordingly.</span></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Short response time and user experience</span></h3>
<p><span style="font-weight: 400;">In a voice conversation, we expect a response within a few tenths of a second, while in text mode it is completely normal to wait much longer. Long silences on the phone are uncomfortable, and even if it is possible to play sounds or music-on-hold between two interactions, nothing replaces the “&#8230;” typing indicator. It is therefore much more critical in voice mode to ensure that the system is fast and to warn the user when an operation will take longer.</span></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Interruptions</span></h3>
<p><span style="font-weight: 400;">Because voice output has a duration, the user can try to interrupt a voice agent. Supporting interruption correctly involves additional technical complexity, but also has additional impact on the dialogue. For example, we may want to assume that if the user says “yes” while several options are being presented, it means they are choosing the first one, and we will support this case.</span></p>
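<p>A minimal sketch of that rule (the utterance patterns and option list are illustrative assumptions):</p>

```javascript
// If the user barges in with a bare "yes" while options are still being
// presented, treat it as choosing the first option; otherwise let the
// normal interpretation run (return null).
function resolveBargeIn(utterance, presentedOptions) {
  const isYes = /^(yes|yeah|yep)\b/i.test(utterance.trim());
  if (isYes && presentedOptions.length > 0) {
    return presentedOptions[0];
  }
  return null;
}
```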
<p>&nbsp;</p>
<p><img decoding="async" class="wp-image-9505 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en-1.png" alt="" width="358" height="393" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en-1.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en-1-273x300.png 273w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">User Silence</span></h3>
<p><span style="font-weight: 400;">Although a virtual agent isn’t discomforted by silences, the treatment of what is commonly called a </span><i><span style="font-weight: 400;">no-input</span></i><span style="font-weight: 400;"> differs greatly depending on the mode of communication. In a voice conversation, a few seconds of silence usually means the user is hesitating or their voice is too low; an appropriate help message will therefore be played.</span></p>
<p><span style="font-weight: 400;">In text mode, it is useless to harass the user with error messages, because the absence of input is treated like any inactivity on a website: after a set time, the user is disconnected if necessary and the conversation ends.</span></p>
<p>&nbsp;</p>
<p><img decoding="async" class="size-full wp-image-9507 aligncenter" src="https://www.nuecho.com/wp-content/uploads/2022/09/no-input-en.png" alt="" width="358" height="377" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/no-input-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/no-input-en-285x300.png 285w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">So, finally…</span></h2>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">How then does one answer the question: “What can be reused from a voice agent to create a chatbot or vice versa?” The answer is very nuanced and a little disappointing. Switching from a voice agent to a chatbot will generally allow more reuse, because the former is usually more restrictive: perhaps it will be enough to adapt the messages a little and to add or remove a few dialogue paths.</span></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">However, in both cases, it is important to take a step back and re-evaluate our use cases and our persona: are they appropriate, feasible and realistic on this new channel? For the use cases that survive this questioning, business rules and high-level dialogue flows can probably be reused. The NLU model (textual data, organization of intents and entities) and the messages of one may serve as a basis for the other, but will be subject to change. Indeed, the approach will have to be adapted to the results of user tests and data collection, so that the user experience does not suffer for the sake of development simplicity.</span></p>
<p>&nbsp;</p>
<p>&nbsp;</p><p>The post <a href="https://www.nuecho.com/voice-agents-vs-chatbots-where-does-the-difference-lie/">Voice agents VS. Chatbots: Where does the difference lie?</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		<enclosure url="https://www.nuecho.com/wp-content/uploads/2022/09/voicebot_cool-en.wav" length="0" type="audio/wav" />

			</item>
		<item>
		<title>Dialogflow distilled: On error handling</title>
		<link>https://www.nuecho.com/dialogflow-distilled-on-error-handling/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=dialogflow-distilled-on-error-handling</link>
		
		<dc:creator><![CDATA[Guillaume Voisine]]></dc:creator>
		<pubDate>Mon, 17 Jun 2019 11:26:48 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Dialogflow]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3924</guid>

<description><![CDATA[<p>We said it before, and we’ll say it again: error handling is a crucial element of conversational UX for chatbots.</p>
<p>The post <a href="https://www.nuecho.com/dialogflow-distilled-on-error-handling/">Dialogflow distilled: On error handling</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>We said it before, and we’ll say it again: error handling is a crucial element of the conversational UX for chatbot. In <a href="https://medium.com/cxinnovations/conversational-ux-for-chatbots-ca8cc8e08ea" target="_blank" rel="noopener">a previous post</a>, we identified two primordial characteristics of good error handling. Firstly, it should be <em>contextual</em>, by avoiding generic messages (like “I’m sorry, I didn’t understand that”) and ensuring that error prompts are always relevant in the context of the dialogue. Secondly, it should be <em>progressive</em>, which consists of giving different error messages if the bot doesn’t recognize the user query multiple times in a row, each time escalating towards more exhaustive answers. Ideally, your error handling should be both contextual and progressive, but it’s easier said than done, as these kinds of behavior demand design considerations that can have profound repercussions on how a bot is implemented.</p>
<h2>Error handling in vanilla Dialogflow: follow-up intents and fallbacks</h2>
<p>Dialogflow uses <a href="https://dialogflow.com/docs/intents/default-intents#default_fallback_intent" target="_blank" rel="noopener">fallback intents</a> when it fails to associate a user input with an intent. This, coupled with the <a href="https://dialogflow.com/docs/contexts/follow-up-intents" target="_blank" rel="noopener">follow-up intent</a> mechanism, allows for contextual error handling. For example, consider a very simple agent containing these intents:</p>
<p><img decoding="async" class="wp-image-3929 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/Default-fallback-intent.jpg" alt="" width="331" height="164" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/Default-fallback-intent.jpg 331w, https://www.nuecho.com/wp-content/uploads/2019/06/Default-fallback-intent-300x149.jpg 300w" sizes="(max-width: 331px) 100vw, 331px" /><br />
<span style="font-family: 'Courier New', Courier, monospace;">DoTheThing</span> is a normal intent that will be triggered by specific user inputs (in this case, a <a href="https://dialogflow.com/docs/intents/training-phrases" target="_blank" rel="noopener">training phrase</a> for this intent could be, for example, “I want to do the thing”). Intents can also be triggered by <a href="https://dialogflow.com/docs/events" target="_blank" rel="noopener">events</a>, but it’s not really relevant to the current discussion.</p>
<p>The <span style="font-family: 'Courier New', Courier, monospace;">DefaultFallbackIntent</span> is created automatically when you generate a new Dialogflow agent, and is triggered when Dialogflow can’t match the user input with anything else. You can think of the <span style="font-family: 'Courier New', Courier, monospace;">DefaultFallbackIntent</span> as a safety net that will catch everything, but only as a last resort. Custom fallback intents can also be created, as we’ll see.</p>
<p>In Dialogflow, it’s possible to declare intents as follow-ups to other intents. To continue with our example, let’s suppose that the agent asks the user if they’re sure they want to do the thing when <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing</span> is matched:</p>
<p><img decoding="async" class="wp-image-3941 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing.jpg" alt="" width="579" height="359" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing.jpg 579w, https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-480x298.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 579px, 100vw" /></p>
<p><span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-yes</span> and <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-no</span> are follow-up intents: they can only be matched if <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing</span> was previously matched. The way Dialogflow determines that is by keeping track of <a href="https://dialogflow.com/docs/contexts" target="_blank" rel="noopener">contexts</a>. Specifically, when <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing</span> is matched, a new context, <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-followup</span>, is generated. This context is then considered as one of the conditions to trigger <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-yes</span> or <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-no</span>, as we can see in this screenshot:</p>
<p><img decoding="async" class="wp-image-3943 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-yes.jpg" alt="" width="486" height="306" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-yes.jpg 486w, https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-yes-480x302.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 486px, 100vw" /></p>
<p>Finally, there is the follow-up fallback, <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-fallback</span>. This kind of fallback works the same way as the <span style="font-family: 'Courier New', Courier, monospace;">DefaultFallbackIntent</span>, but will only be operational when a specific context is active (<span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-followup</span>), like with any follow-up intent.</p>
<p>This allows a Dialogflow agent to have contextual error messages for virtually any situation: simply create follow-up fallbacks for every point in your dialogue where the user could say something that is not supported by your agent (pro-tip: most of the time, it’s all the time). You can even implement <em>progressive</em> error handling with this, since follow-up fallback intents can be daisy-chained to ensure proper escalation of error prompts:</p>
<p><img decoding="async" class="wp-image-3945 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-fallback.jpg" alt="" width="575" height="429" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-fallback.jpg 575w, https://www.nuecho.com/wp-content/uploads/2019/06/DoTheThing-fallback-480x358.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 575px, 100vw" /></p>
<p>Here, if a user triggers <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing</span> and then says something that is not recognized, <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-fallback</span> will handle the situation. If the user then says something that is contextually correct (something that would match <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-yes</span> or <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-no</span>), the dialogue will continue normally. If they say another unsupported input, the agent will use the next follow-up fallback, <span style="font-family: 'Courier New', Courier, monospace;">DoTheThing-fallback-fallback</span>.</p>
<h2>The problem with slot filling error handling</h2>
<p>So all is good, right? After all, we just demonstrated that you can do anything you want with follow-ups and fallbacks. Well, <em>almost</em> anything. Most notably, this approach will not work with <a href="https://dialogflow.com/docs/concepts/slot-filling" target="_blank" rel="noopener">slot filling</a>. But before we tell you why and what can be done about it, we need to take a closer look at how slot filling is handled in Dialogflow.</p>
<p>First of all, what is slot filling? It is a simple conversational pattern where a bot is given a list of information to obtain (or <em>slots</em> to <em>fill</em>). Usually, each of these slots will correspond to an entity that the chatbot will try to extract from the user’s input. When a slot is considered filled, the bot will simply move to the next slot, until they’re all filled.</p>
<p>In Dialogflow, slot filling is tied to a given intent. In other words, a user will first need to trigger a specific intent for the agent to start the slot filling. One important detail to note is that the agent will <em>stay</em> on the same intent for the duration of the slot filling.</p>
<p>Let’s see how it works with this image taken from Dialogflow’s documentation:</p>
<p><img decoding="async" class="wp-image-3947 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/Action-Parameters.jpg" alt="" width="713" height="440" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/Action-Parameters.jpg 713w, https://www.nuecho.com/wp-content/uploads/2019/06/Action-Parameters-480x296.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 713px, 100vw" /></p>
<p>The agent is given a list of three slots to fill: <span style="font-family: 'Courier New', Courier, monospace;">number-int</span>, <span style="font-family: 'Courier New', Courier, monospace;">color</span> and <span style="font-family: 'Courier New', Courier, monospace;">size</span>, each of a different entity type. When the user triggers the intent that contains the slot filling, the agent will ask the user for each slot, in order. By default, the question is in the form “What is SLOT_NAME?”, but customized prompts can be defined for each slot.</p>
<p>What constitutes an error in slot filling is when the agent is not able to extract the expected entity from a user input. In this situation, by default, the Dialogflow agent will simply repeat the question until it receives a valid answer. Forever.</p>
<p>Error handling in slot filling is problematic because we can’t use the strategies that work elsewhere (follow-up intents and fallbacks): slot filling always happens <em>within</em> a single intent, and those strategies are built on top of the intent detection mechanism. The result is very basic error handling for slot filling; while it at least makes it possible to continue the dialogue, there is simply no error message guiding the user, which is poor conversational UX.</p>
<p>Let’s try to fix that.</p>
<h2>Detecting slot filling error context</h2>
<p>(Please note that for the purpose of this article, we’ll suppose that the fulfillment is deployed in a serverless execution environment such as Google Cloud Functions, which must remain stateless. Other approaches are of course possible.)</p>
<p>The first step to implement error handling in slot filling is to properly configure your agent to use your fulfillment webhook, and to activate calls to your fulfillment for the slot filling in your intent:</p>
<p><img decoding="async" class="wp-image-3949 alignnone size-full" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/06/Fulfillment.jpg" alt="" width="552" height="249" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/Fulfillment.jpg 552w, https://www.nuecho.com/wp-content/uploads/2019/06/Fulfillment-480x217.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 552px, 100vw" /></p>
<p>That was easy enough. The next step is to make your fulfillment code detect occurrences of errors in the slot-filling. The problem is that Dialogflow doesn’t communicate that directly. The only information that is clearly exposed in the fulfillment is the detected intent (which doesn’t change for the duration of the slot filling). But you can also extract from the contexts the current slot that the dialogue is trying to fill (more on that later). Even if there is no mention of any failed entity extraction, we already have everything we need to implement slot filling error detection.</p>
<p>The idea is to use contexts to leave a trace of every time slot filling is attempted for any given slot. Here are the main steps for slot filling error detection:</p>
<ol>
<li>Detect if you are in slot filling</li>
<li>If so, construct an error ID from the intent’s name and the current slot’s name</li>
<li>Check if a context with that name already exists:
<ol type="a">
<li>If it doesn’t, create it and continue as normal</li>
<li>If it does, you just detected an error</li>
</ol>
</li>
</ol>
<p>How does this work? By making sure to create a custom context every time your agent tries to fill a slot, you leave evidence of that particular attempt. If the slot is properly filled, the agent will move to the next slot, and your fulfillment will generate another error ID (different from the previous one, since the slot is different). But if a call to your fulfillment is made while your custom context already exists, it means that the agent is trying to fill the same slot again, which means that the user didn’t respond correctly to the slot prompt. In other words, you just detected an error in slot filling!</p>
<p>Let’s see how each step can be implemented:</p>
<p>One easy way to validate that you are indeed in slot filling is to use the <a href="https://dialogflow.com/docs/reference/fulfillment-library/webhook-client" target="_blank" rel="noopener">WebhookClient</a> and look in <span style="font-family: 'Courier New', Courier, monospace;">client.contexts</span> for a context (automatically generated by Dialogflow) whose name starts with <span style="font-family: 'Courier New', Courier, monospace;">`${INTENT}_dialog_params_`</span>, where <span style="font-family: 'Courier New', Courier, monospace;">INTENT</span> is the name of the current intent, in lower case. If the context doesn’t exist, you are not in slot filling. If it does, you are, and you can extract the name of the current slot from the end of that context’s name (immediately after “dialog_params_”).</p>
<p>And now, to create the error ID: since we can find the name of the intent and the name of the current slot, we can combine them to forge a unique error ID for each slot in the dialogue. It doesn’t have to be fancy; something like <span style="font-family: 'Courier New', Courier, monospace;">`error_${INTENT}_${SLOT}`</span> will do the trick.</p>
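<p>Steps 1 and 2 can be sketched as plain helper functions. This is only a sketch under assumptions: <span style="font-family: 'Courier New', Courier, monospace;">contexts</span> stands in for the array exposed by <span style="font-family: 'Courier New', Courier, monospace;">client.contexts</span>, and both helper names are hypothetical, not part of the WebhookClient API.</p>

```javascript
// Sketch of steps 1 and 2. Assumption: `contexts` is the array of context
// objects exposed by client.contexts, each with a lowercase `name` field.

// Step 1: look for a Dialogflow-generated "<intent>_dialog_params_<slot>"
// context; its presence means we are currently in slot filling.
function getCurrentSlot(contexts, intentName) {
  const prefix = `${intentName.toLowerCase()}_dialog_params_`;
  const ctx = contexts.find(c => c.name.startsWith(prefix));
  // The current slot's name immediately follows "dialog_params_".
  return ctx ? ctx.name.slice(prefix.length) : null;
}

// Step 2: forge a unique error ID from the intent and slot names.
function buildErrorId(intentName, slotName) {
  return `error_${intentName.toLowerCase()}_${slotName}`;
}
```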
<p>To check if our custom context already exists, we can use the following function from the WebhookClient:</p>
<p style="padding-left: 40px;"><span style="font-family: 'Courier New', Courier, monospace;">client.getContext(errorID);</span></p>
<p>That’s pretty much self-explanatory. As for context creation:</p>
<p style="padding-left: 40px; font-family: 'Courier New', Courier, monospace;">client.setContext({<br />
name: `error_${INTENT}_${SLOT}`,<br />
lifespan: 1<br />
});</p>
<p>Here, we set the lifespan of the custom error context to 1, to make it disappear as soon as possible when it’s not needed anymore.</p>
<p>And that’s it for slot filling context error detection. From there, you already have most of the things you need to implement actual error handling: you just have to override the original bot response with a contextually appropriate message, which can be done with this simple command:</p>
<p style="padding-left: 40px;"><span style="font-family: 'Courier New', Courier, monospace;">client.add(message);</span></p>
<p>Of course, you now need a way to retrieve or generate these contextually correct messages from within the fulfillment; in other words, you need to do it programmatically. A good way to implement this while keeping your code clean and language-agnostic is to use the error ID you created as a key into a localization library that consolidates all your error messages. From there, it’s a simple matter of calling the localization library with the error ID to resolve it to a proper error message.</p>
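<p>Putting the pieces together, the whole check-and-override flow might look like the following sketch. <span style="font-family: 'Courier New', Courier, monospace;">handleSlotFillingError</span> and the <span style="font-family: 'Courier New', Courier, monospace;">messages</span> table are hypothetical names; the <span style="font-family: 'Courier New', Courier, monospace;">client</span> methods mirror the WebhookClient calls shown above.</p>

```javascript
// Sketch of step 3 plus the response override. Assumptions: `client` exposes
// getContext/setContext/add as shown earlier, and `messages` is a hypothetical
// lookup table standing in for a localization library keyed by error ID.
function handleSlotFillingError(client, intentName, slotName, messages) {
  const errorId = `error_${intentName.toLowerCase()}_${slotName}`;
  if (!client.getContext(errorId)) {
    // 3a. First attempt at this slot: leave a trace and continue as normal.
    client.setContext({ name: errorId, lifespan: 1 });
    return false;
  }
  // 3b. The context already exists: the same slot is being asked again, so we
  // just detected an error. Refresh the trace (lifespan 1 would otherwise let
  // it expire) and override the bot response with a contextual message.
  client.setContext({ name: errorId, lifespan: 1 });
  client.add(messages[errorId]);
  return true;
}
```

<p>In a real agent, the <span style="font-family: 'Courier New', Courier, monospace;">messages[errorId]</span> lookup would be a call into your localization library.</p>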
<h2>Going progressive</h2>
<p>Now that we have implemented contextual error handling for the slot filling, let’s improve it by making it progressive. An easy way to do that is to add a <span style="font-family: 'Courier New', Courier, monospace;">count</span> parameter to your custom context:</p>
<p style="padding-left: 40px; font-family: 'Courier New', Courier, monospace;">client.setContext({<br />
name: `error_${INTENT}_${SLOT}`,<br />
lifespan: 1,<br />
parameters: { count: 1 }<br />
});</p>
<p>From there, if an error occurs, you can append the <span style="font-family: 'Courier New', Courier, monospace;">count</span> to the error ID (<span style="font-family: 'Courier New', Courier, monospace;">`${errorID}_${COUNT}`</span>) and use that new ID template to look up your error messages in your localization library (or whatever similar strategy you use), giving you any number of distinct error messages for any given context. Just don’t forget to increment the count value in the context each time you detect an error, by calling <span style="font-family: 'Courier New', Courier, monospace;">setContext</span> again. You may also want to cap that number, to avoid sending the user to an error message that was never written.</p>
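<p>The progressive variant can be sketched the same way. <span style="font-family: 'Courier New', Courier, monospace;">progressiveErrorId</span> and <span style="font-family: 'Courier New', Courier, monospace;">MAX_COUNT</span> are hypothetical names; the cap keeps the lookup from reaching a message that was never written.</p>

```javascript
// Sketch of the progressive variant. Assumption: `client` exposes
// getContext/setContext as above; MAX_COUNT caps the message index.
const MAX_COUNT = 3; // hypothetical cap on distinct messages per slot

function progressiveErrorId(client, errorId) {
  const ctx = client.getContext(errorId);
  if (!ctx) {
    // First attempt at this slot: leave a trace with a count of 1.
    client.setContext({ name: errorId, lifespan: 1, parameters: { count: 1 } });
    return null; // no error yet
  }
  // Error detected: use the stored count to pick a message, then increment it.
  const count = ctx.parameters.count;
  client.setContext({
    name: errorId,
    lifespan: 1,
    parameters: { count: count + 1 },
  });
  return `${errorId}_${Math.min(count, MAX_COUNT)}`;
}
```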
<h2>Next steps</h2>
<p>This concludes our short discussion on slot filling error handling in Dialogflow. Of course, the proposed implementation is bare-bones and doesn’t support genuinely useful features such as constraints, disambiguation (“Did you mean A or B?”) or certain special situations (things can get more complex if you activate the <a href="https://cloud.google.com/dialogflow-enterprise/docs/knowledge-connectors" target="_blank" rel="noopener">Knowledge Base feature</a>, for example). These topics will most likely be covered in a future blog post about Dialogflow. Thanks for reading!</p><p>The post <a href="https://www.nuecho.com/dialogflow-distilled-on-error-handling/">Dialogflow distilled: On error handling</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Dialogflow Distilled: On Preemptive Slot-filling versus Branching</title>
		<link>https://www.nuecho.com/dialogflow-distilled-on-preemptive-slot-filling-versus-branching/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=dialogflow-distilled-on-preemptive-slot-filling-versus-branching</link>
		
		<dc:creator><![CDATA[Pascal Deschênes]]></dc:creator>
		<pubDate>Thu, 02 May 2019 18:11:16 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Dialogflow]]></category>
		<category><![CDATA[IVA]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3005</guid>

					<description><![CDATA[<p>As we gain experience with Google Dialogflow, we like to take a step back and identify usage patterns to feed in our development practices. This blog post aims at depicting and distilling one of such patterns: preemptive slot-filling versus branching. Let’s say that one of your chatbot requirements is to perform a banking payment to [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/dialogflow-distilled-on-preemptive-slot-filling-versus-branching/">Dialogflow Distilled: On Preemptive Slot-filling versus Branching</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>As we gain experience with Google Dialogflow, we like to take a step back and identify usage patterns to feed in our development practices. This blog post aims at depicting and distilling one of such patterns: preemptive slot-filling versus branching.</p>
<p>Let’s say that one of your chatbot requirements is to perform a banking payment to a specific merchant. This is a fairly standard transaction involving a single intent capturing a few slots such as amount, account, merchant and a date.</p>
<p>&gt; <a href="https://medium.com/cxinnovations/dialogflow-distilled-on-preemptive-slot-filling-versus-branching-9b662eeed027" target="_blank" rel="noopener noreferrer">Read full version on Medium</a></p><p>The post <a href="https://www.nuecho.com/dialogflow-distilled-on-preemptive-slot-filling-versus-branching/">Dialogflow Distilled: On Preemptive Slot-filling versus Branching</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Dialogflow and Beyond</title>
		<link>https://www.nuecho.com/google-dialogflow-and-beyond/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=google-dialogflow-and-beyond</link>
		
		<dc:creator><![CDATA[Pascal Deschênes]]></dc:creator>
		<pubDate>Mon, 21 Jan 2019 18:48:27 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Dialogflow]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3256</guid>

					<description><![CDATA[<p>Nu Echo has been using Google Dialogflow for some time now and would like to take a moment to share our thoughts about using the platform for chatbot and intelligent virtual agent projects within enterprise organizations. In this blog post, we will describe what Dialogflow is good for, its current limitations, and how we work [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/google-dialogflow-and-beyond/">Dialogflow and Beyond</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
<content:encoded><![CDATA[<p class="graf graf--p graf--hasDropCapModel graf--hasDropCap"><span class="graf-dropCap">Nu</span> Echo has been using <a class="markup--anchor markup--p-anchor external" href="https://dialogflow.com/" target="_blank" rel="noopener noreferrer" data-href="https://dialogflow.com/">Google Dialogflow</a> for some time now and would like to take a moment to share our thoughts about using the platform for chatbot and intelligent virtual agent projects within enterprise organizations. In this blog post, we will describe what Dialogflow is good for, its current limitations, and how we work around those limitations to get the most out of our development. We will also explore our plans for the future as part of our IVA Solutions practice. To kick off this series, this article focuses solely on the engineering perspective, but watch for future posts touching on dialogue management, the conversational aspects of great IVA development, and more!</p>
<h3 class="graf graf--h3">What makes it great!</h3>
<p class="graf graf--p graf--hasDropCapModel graf--hasDropCap"><span class="graf-dropCap">B</span>rowsing around Dialogflow’s website, you can easily find the promised benefits. Some are accurate; others are less evident as true benefits, or not there yet. Let’s start with the true benefits we have been able to experience and take advantage of:</p>
<p class="graf graf--p"><strong class="markup--strong markup--p-strong">Quick and easy to start building</strong>: Yes! You can truly create your first bot within a matter of minutes. I think my mother did one in, like, 16 minutes, while cooking her famous apple jelly. You login to the console, create an agent project, create one or two intents, and then try it out within the console. Instant reward. A few more clicks to activate the Phone Gateway integration and then you can, yes, call in (assuming your bot is en-US for now). Small dopamine rush.</p>
<p class="graf graf--p"><strong class="markup--strong markup--p-strong">Built on Google infrastructure</strong>: From within a project, you can tap into the rich Google Cloud ecosystem. Conversation logs get pushed over <a class="markup--anchor markup--p-anchor external" href="https://cloud.google.com/stackdriver/" target="_blank" rel="noopener noreferrer" data-href="https://cloud.google.com/stackdriver/">Stackdriver</a>, fulfillment is only a few <a class="markup--anchor markup--p-anchor external" href="https://cloud.google.com/functions/" target="_blank" rel="noopener noreferrer" data-href="https://cloud.google.com/functions/">Google Cloud Functions</a> or <a class="markup--anchor markup--p-anchor external" href="https://firebase.google.com/docs/functions/" target="_blank" rel="noopener noreferrer" data-href="https://firebase.google.com/docs/functions/">Firebase Functions</a> away, security and containment are directly handled as part of <a class="markup--anchor markup--p-anchor external" href="https://cloud.google.com/iam/" target="_blank" rel="noopener noreferrer" data-href="https://cloud.google.com/iam/">IAM</a>, while <a class="markup--anchor markup--p-anchor external" href="https://cloud.google.com/speech-to-text/" target="_blank" rel="noopener noreferrer" data-href="https://cloud.google.com/speech-to-text/">speech-to-text</a> and <a class="markup--anchor markup--p-anchor external" href="https://cloud.google.com/text-to-speech/" target="_blank" rel="noopener noreferrer" data-href="https://cloud.google.com/text-to-speech/">text-to-speech</a> relies on Google Cloud respective APIs. And frankly, although we are using some beta features, the platform is quite stable.</p>
<p class="graf graf--p"><strong class="markup--strong markup--p-strong">Easy to scale</strong>: Seriously! This is an additional benefit to the point above. No need to worry about any sort of viral effect your bot might experience. There’s not even a dial or knob to tweak or turn on; it’s essentially all done behind the scenes. Couple it with Google Cloud Functions for a serverless fulfillment stage and you’re golden.</p>
<p class="graf graf--p"><strong class="markup--strong markup--p-strong">Strong natural language understanding (NLU) capabilities</strong>: So far so good on this front. We definitely have yet to push the limits but its context-based approach appears to be robust. More on that to come in future posts.</p>
<h3 class="graf graf--h3">Where it might fall a little short…</h3>
<p>&gt; <a class="external" href="https://medium.com/cxinnovations/dialogflow-and-beyond-67991b3dc87f" target="_blank" rel="noopener noreferrer">CLICK HERE</a> to read the full blog post</p><p>The post <a href="https://www.nuecho.com/google-dialogflow-and-beyond/">Dialogflow and Beyond</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
