<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI Virtual Voice Experts with Google Dialogflow CX &#8211; CCAI &#8211; Nu Echo</title>
	<atom:link href="https://www.nuecho.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.nuecho.com/</link>
	<description>Nu Echo</description>
	<lastBuildDate>Wed, 14 Dec 2022 19:01:23 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.nuecho.com/wp-content/uploads/2019/11/cropped-favicon-32x32.png</url>
	<title>AI Virtual Voice Experts with Google Dialogflow CX &#8211; CCAI &#8211; Nu Echo</title>
	<link>https://www.nuecho.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Call automation doesn&#8217;t have to be risky, long and costly</title>
		<link>https://www.nuecho.com/call-automation-doesnt-have-to-be-risky-long-and-costly/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=call-automation-doesnt-have-to-be-risky-long-and-costly</link>
		
		<dc:creator><![CDATA[Pierre Moisan]]></dc:creator>
		<pubDate>Wed, 14 Dec 2022 16:23:53 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Industries]]></category>
		<category><![CDATA[IVR]]></category>
		<category><![CDATA[CAI]]></category>
		<category><![CDATA[Call automation]]></category>
		<category><![CDATA[CCaaS]]></category>
		<category><![CDATA[contact center automation]]></category>
		<category><![CDATA[contact center virtual agent]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[Conversational design]]></category>
		<category><![CDATA[CX]]></category>
		<category><![CDATA[use cases virtual agents]]></category>
		<category><![CDATA[VAaaS]]></category>
		<category><![CDATA[virtual agent]]></category>
		<guid isPermaLink="false">https://www.nuecho.com/?p=9565</guid>

					<description><![CDATA[<p>As explained in a previous post, “Ladies and gentlemen, we&#8217;re experiencing some turbulence. Please hold the line while we try to find an available agent.”, call automation solutions can help customer contact centers address several challenges at once, such as variable call volumes and workforce shortages. In recent years, virtual agents have benefited [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/call-automation-doesnt-have-to-be-risky-long-and-costly/">Call automation doesn’t have to be risky, long and costly</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
<p>The post <a href="https://www.nuecho.com/call-automation-doesnt-have-to-be-risky-long-and-costly/">Call automation doesn&#8217;t have to be risky, long and costly</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">As explained in a previous post, “</span><a href="https://www.nuecho.com/news-events/ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent/"><span style="font-weight: 400;">Ladies and gentlemen, we&#8217;re experiencing some turbulence. Please hold the line while we try to find an available agent.</span></a><span style="font-weight: 400;">”, call automation solutions can help customer contact centers address several challenges at once, such as variable call volumes and workforce shortages.</span></p>
<p><span style="font-weight: 400;">In recent years, virtual agents have benefited from outstanding technology improvements in the field of Natural Language Processing (NLP) and Artificial Intelligence (AI). </span></p>
<p><span style="font-weight: 400;">However, the complexity and the amount of effort required to leverage conversational AI platforms such as Google or Amazon still prevents many businesses from seeing a move towards virtual agents as a profitable investment. That’s where managed virtual agent solutions save the day!</span></p>
<p><span style="font-weight: 400;">By leveraging the call volumes of several customers, managed virtual agent solution providers are able to offer on-demand virtual agents much faster and at a much lower cost than the implementation of an entire conversational AI platform.</span></p>
<p><span style="font-weight: 400;">Many businesses have developed their own set of criteria when it comes to selecting an outsourced workforce but these criteria may not entirely apply when it comes to virtual agents.</span></p>
<p><span style="font-weight: 400;"><strong>When choosing a managed virtual agent solution provider, businesses should consider these 4 factors</strong>:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Integration &amp; security </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Voice &amp; telephony experience</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Conversational design experience</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Continuously improving solution </span></li>
</ul>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">Let’s explore the details of each factor and what should be expected from providers.</span></p>
<p>&nbsp;</p>
<h2><b>Integration &amp; security</b></h2>
<p><span style="font-weight: 400;">From an integration perspective, a managed virtual agent solution is similar to outsourcing your calls which involves providing a way to transfer calls to another contact center and providing them access to the systems required for the selected  use cases. </span></p>
<p><span style="font-weight: 400;">The following figure provides a high-level architecture view of a managed virtual agent solution.</span></p>
<p><img decoding="async" class="aligncenter wp-image-9570 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/12/Virtual-Agent-As-a-Service-VAaaS-Waterfield-NuEcho-Architecture.png" alt="" width="1090" height="664" srcset="https://www.nuecho.com/wp-content/uploads/2022/12/Virtual-Agent-As-a-Service-VAaaS-Waterfield-NuEcho-Architecture.png 1090w, https://www.nuecho.com/wp-content/uploads/2022/12/Virtual-Agent-As-a-Service-VAaaS-Waterfield-NuEcho-Architecture-980x597.png 980w, https://www.nuecho.com/wp-content/uploads/2022/12/Virtual-Agent-As-a-Service-VAaaS-Waterfield-NuEcho-Architecture-480x292.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) 1090px, 100vw" /></p>
<p><span style="font-weight: 400;">Hopefully, your provider will be able to integrate with your existing systems without requiring that  you upgrade some of them, such as your contact center platform. You will also need to review the implications of exposing some access points to an external provider.</span></p>
<p><span style="font-weight: 400;">You will need to discuss how secure the overall integration will be as customer data and their voice interactions will be shared with this provider. You need to verify that your provider will maintain customer data (in transit and at rest) within the geographies you serve. For example, you might not want to have traffic go through the US if your business is in Canada. You also need to review the security posture of your provider. Security audits and certifications (ex. SOC 2, ISO 27001) can surely help reviewing and ensuring compliance with your requirements more quickly. </span></p>
<p>&nbsp;</p>
<h2><b>Voice &amp; telephony experience</b></h2>
<p><span style="font-weight: 400;">Providing a great conversational experience on the phone channel is more than just taking a chatbot and adding speech-to-text and text-to-speech. Voice conversations are real-time synchronous communications. There are multiple factors at play: keeping response time in milliseconds to avoid silence and awkwardness, detecting accurately the end of speech and interruptions (i.e. barge-in), handling low quality audio and noise, have great recognition accuracy even when faced with accent or people hesitating or changing their mind and being able to render natural responses.</span></p>
<p><span style="font-weight: 400;">When choosing a managed virtual agent provider, you will want to make sure that they have a lot of experience with the challenges of handling phone communications. </span></p>
<p>&nbsp;</p>
<h2><b>Conversational design experience</b></h2>
<p><span style="font-weight: 400;">As mentioned regarding low satisfaction scores for some IVRs, a one-size-fits-all mentality does not usually provide a good experience. Personalisation should be part of your conversation design strategy. For example, integrating your virtual agents with a CRM can help leverage customer data and context to better understand their needs and the reason why they might be contacting you. When working with a provider, you will want to ensure that they will not just give you a cookie-cutter solution that does not provide any personalization with the conversational experience.</span></p>
<p><span style="font-weight: 400;">Focusing on the user experience should aim at reducing friction with different interactions. Unfortunately, badly designed IVRs might be too restrictive and follow a rigid structure that customers do not appreciate. While you don’t want to trick your customers into believing they are talking to a human, your goal should be to  replicate as much as possible the experience of talking to a human. Letting people express themselves more naturally and capturing the required information as they speak freely is what happens in a human-to-human conversation. </span></p>
<p><span style="font-weight: 400;">You also need to consider your engagement channels with regards to the conversational design. While having an omnichannel solution is certainly desirable, businesses need to understand distinct constraints that relate to each channel. Let’s take for example an appointment booking use case with a virtual agent. If a customer wants to book an appointment on a given day where there is a lot of availability, then you could show all the times in a widget on a chat interface and let the customer review and select the proper time. But on the phone, this strategy will fail since the number of choices will be too great to just list them verbally. A voice user interface comes with several considerations that need to be incorporated in your design. Be wary of providers that tell you their virtual agents work with any channel. </span></p>
<p>&nbsp;</p>
<h2><b>Continuously improving solution</b></h2>
<p><span style="font-weight: 400;">People are unpredictable. For a virtual agent solution, you need to plan that people might not interact with a virtual agent as expected. In addition, the needs of your customers can evolve with time. Just like human agents, virtual agents need some form of monitoring, quality assurance and training. </span></p>
<p><span style="font-weight: 400;">Therefore, it is important to understand that a virtual agent solution is not only about implementing and launching a solution. It involves monitoring, supporting, maintaining and optimizing the solution to adapt to your customers. You will want to make sure that your provider will be a good partner in constantly improving your solution. </span></p>
<p><span style="font-weight: 400;">You will also want to see how your provider can leverage the usage data to provide insights into the voice of your customers. As people express themselves naturally, this is a great opportunity to identify how you can better serve them. This can also help identify other use cases that could be automated. </span></p>
<p>&nbsp;</p>
<h2><b>Benefits of a managed solution</b></h2>
<p><span style="font-weight: 400;">Leveraging the expertise of a partner provider of virtual agents has multiple benefits. </span></p>
<p><span style="font-weight: 400;">The most significant benefit is accelerating the time-to-value of the solution. A provider will already have possible integration connectors to your systems, have designed similar conversational agents or dialogs, have optimized difficult user inputs to recognize, … This can greatly reduce a virtual agent project time from several months to just weeks. </span></p>
<p><span style="font-weight: 400;">Defining, designing, implementing and maintaining virtual agents requires a cross-functional team and a good understanding of the latest conversational AI technologies. This expertise can greatly increase the total cost of the solution and increase the required investments. A fully managed virtual agent provider can reduce these investments and make costs more predictable. </span></p>
<p><span style="font-weight: 400;">Deploying a customer facing voice virtual assistant can be risky for businesses.  According to a </span><a href="https://info.rasa.com/conversational-ai-for-customer-experience-survey-report"><span style="font-weight: 400;">Rasa survey</span></a><span style="font-weight: 400;">, 41% respondents reported that limited experience building virtual assistants was a barrier to conversational AI adoption and only 18% of respondents using voice assistants are in production with it.  A fully managed virtual agent provider can leverage its experience to ensure the successful deployment of solutions in production. </span></p>
<p><span style="font-weight: 400;"> </span></p>
<h2><b>Summing up</b></h2>
<p><span style="font-weight: 400;">Automating calls through a fully managed virtual agent solution can help contact centers service their customers for simple and repetitive tasks in order to let their human agents focus on value added calls. It involves partnering with a provider and businesses should make sure that key criteria will be fulfilled by their provider. </span></p>
<p><span style="font-weight: 400;">Nu Echo has 20+ years of experience creating conversational experiences that improves operational efficiency with an exceptional customer experience.  If you are interested in a managed virtual agent solution, then </span><strong><a href="https://www.nuecho.com/company/contact-us/">contact us today</a>. </strong></p>
<p>&nbsp;</p><p>The post <a href="https://www.nuecho.com/call-automation-doesnt-have-to-be-risky-long-and-costly/">Call automation doesn’t have to be risky, long and costly</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Ladies and gentlemen, we&#8217;re experiencing some turbulence. Please hold the line while we try to find an available agent.</title>
		<link>https://www.nuecho.com/ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent</link>
		
		<dc:creator><![CDATA[Pierre Moisan]]></dc:creator>
		<pubDate>Wed, 23 Nov 2022 14:36:52 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Industries]]></category>
		<category><![CDATA[IVR]]></category>
		<category><![CDATA[contact center automation]]></category>
		<category><![CDATA[contact center virtual agent]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[Conversational design]]></category>
		<category><![CDATA[CX]]></category>
		<category><![CDATA[use cases virtual agents]]></category>
		<category><![CDATA[virtual agent]]></category>
		<guid isPermaLink="false">https://www.nuecho.com/?p=9547</guid>

					<description><![CDATA[<p>Getting used to waiting times of over an hour before you can talk to an agent? Has it gotten worse with the pandemic? What is going on with call centers? Imagine you booked tickets to an upcoming event that you are really excited about. A few weeks later, you get an email saying that your [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent/">Ladies and gentlemen, we’re experiencing some turbulence. Please hold the line while we try to find an available agent.</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
<p>The post <a href="https://www.nuecho.com/ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent/">Ladies and gentlemen, we&#8217;re experiencing some turbulence. Please hold the line while we try to find an available agent.</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><i><span style="font-weight: 400;">Getting used to waiting times of over an hour before you can talk to an agent? Has it gotten worse with the pandemic? What is going on with call centers?</span></i></p>
<p><span style="font-weight: 400;">Imagine you booked tickets to an upcoming event that you are really excited about. A few weeks later, you get an email saying that your event has been canceled. You try to understand your options: can I get a refund, is the event postponed, what seating will I get if I reschedule, … You browse through the Web site and cannot find answers to your questions. So you decide to call the event’s customer service and then you have to wait for almost an hour until you can finally talk to an agent. </span></p>
<p><span style="font-weight: 400;">This may sound like a familiar story. </span>Through this blogpost, I will illustrate why with a few counterexamples.</p>
<p><span style="font-weight: 400;"> These bad experiences are mostly due to 4 main trends currently affecting contact centers.</span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">⚠️High volumes on the phone channel</span></h2>
<p><span style="font-weight: 400;">According to </span><a href="https://cdncom.cfigroup.com/wp-content/uploads/CFI-contact-center-satisfaction-2020.pdf"><span style="font-weight: 400;">CFI group</span></a><span style="font-weight: 400;">, 76% of people reaching out to customer service choose to place a phone call.</span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">⚠️Unpredictable call volumes</span></h2>
<p><span style="font-weight: 400;"> According to </span><a href="https://www.talkdesk.com/blog/contact-center-holiday-season/"><span style="font-weight: 400;">Talkdesk</span></a><span style="font-weight: 400;">, 50% of retail CX professionals say the top challenge they face is high variability in the amount of customer support needed during holidays, seasonal spikes, off-season dips and others. People are also adapting to new situations such as work-from-home, virtual interactions and changing travel rules.</span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">⚠️Rising call complexity</span></h2>
<p><span style="font-weight: 400;">According to the </span><a href="https://hbr.org/2020/04/supporting-customer-service-through-the-coronavirus-crisis"><span style="font-weight: 400;">Harvard Business Review</span></a><span style="font-weight: 400;"> which studied the effect of the COVID pandemic on customer service, the percentage of calls scored as “difficult” more than doubled, hold times increased by as much as 34 percent and escalations by 68 percent.</span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">⚠️Increasing contact center workforce costs &amp; complexity</span></h2>
<p><span style="font-weight: 400;">According to </span><a href="https://www.cgsinc.com/en/resources/infographic-ongoing-impact-covid-19-contact-center-support-services"><span style="font-weight: 400;">CGS</span></a><span style="font-weight: 400;">, 37% of companies are not confident or only somewhat confident in their ability to maintain service levels and prevent negative effects to service levels from additional waves of COVID. </span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">🌀A perfect storm for contact centers</span></h2>
<p><span style="font-weight: 400;">The combination of these trends is putting a tremendous strain on contact centers. Managing a contact center workforce has become increasingly difficult with more unpredictable call volumes combined with hiring and staffing difficulties. </span></p>
<p><span style="font-weight: 400;">Not being able to speak with a business can lead to lower customer satisfaction and engagement which can lead to loss of revenue and lower brand value. </span></p>
<p><span style="font-weight: 400;">How can contact centers keep up? Hopefully, businesses can use different strategies to mitigate the impact of these challenges. To help choose the right strategies, it is important to consider the complexity as well as the volume of each use case or categories of calls. Call automation is becoming a key element for contact centers to let virtual agents handle simple, transactional calls and leave more complex, added-value calls to human agents. </span></p>
<p><span style="font-weight: 400;">We will learn more about how contact centers can partner with providers to automate calls with reduced investments, more predictable costs and lower time to value in our soon-to-be-published article </span><em><span style="font-weight: 400;">We are currently experiencing higher than normal call volumes</span><span style="font-weight: 400;">. </span></em></p><p>The post <a href="https://www.nuecho.com/ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent/">Ladies and gentlemen, we’re experiencing some turbulence. Please hold the line while we try to find an available agent.</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p><p>The post <a href="https://www.nuecho.com/ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent/">Ladies and gentlemen, we&#8217;re experiencing some turbulence. Please hold the line while we try to find an available agent.</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Voice agents VS. Chatbots: Where does the difference lie?</title>
		<link>https://www.nuecho.com/voice-agents-vs-chatbots-where-does-the-difference-lie/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=voice-agents-vs-chatbots-where-does-the-difference-lie</link>
		
		<dc:creator><![CDATA[Karine Dery]]></dc:creator>
		<pubDate>Wed, 14 Sep 2022 16:27:02 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Dialogflow]]></category>
		<category><![CDATA[Chatbot]]></category>
		<category><![CDATA[contact center automation]]></category>
		<category><![CDATA[contact center virtual agent]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[Conversational design]]></category>
		<category><![CDATA[DialogFlow]]></category>
		<category><![CDATA[NLP model]]></category>
		<category><![CDATA[NLU model]]></category>
		<category><![CDATA[use cases virtual agents]]></category>
		<category><![CDATA[virtual agent]]></category>
		<category><![CDATA[Voicebot]]></category>
		<category><![CDATA[voicebot persona]]></category>
		<guid isPermaLink="false">https://www.nuecho.com/?p=9487</guid>

					<description><![CDATA[<p>In our field of work, we often hear “Once we’re done with the voice assistant, we’ll just use the dialog to add a chatbot on our website!” or “now that our chatbot is done, it will be a piece of cake to make a voice bot”. Seemingly, it looks like we would only need to [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/voice-agents-vs-chatbots-where-does-the-difference-lie/">Voice agents VS. Chatbots: Where does the difference lie?</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
<p>The post <a href="https://www.nuecho.com/voice-agents-vs-chatbots-where-does-the-difference-lie/">Voice agents VS. Chatbots: Where does the difference lie?</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">In our field of work, we often hear “Once we’re done with the voice assistant, we’ll just use the dialog to add a chatbot on our website!” or “now that our chatbot is done, it will be a piece of cake to make a voice bot”. Seemingly, it looks like we would only need to add or remove the speech processing (</span><i><span style="font-weight: 400;">speech-to-text</span></i><span style="font-weight: 400;">, STT) and speech synthesis (</span><i><span style="font-weight: 400;">text-to-speech</span></i><span style="font-weight: 400;">, TTS) layers to magically transform a chatbot into a voicebot and vice versa (by the wave of a magic wand).</span></p>
<p><span style="font-weight: 400;">Based on our experience, we would also describe such a simple transformation as magic!</span></p>
<p><span style="font-weight: 400;">Through this blogpost, I will illustrate why with a few counterexamples.</span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">Generating the output</span></h2>
<h3><span style="font-weight: 400;">Presenting complex information</span></h3>
<p><span style="font-weight: 400;">Within a chatbot, text information can be enriched with images, hyperlinks, slideshows, etc. Some use cases such as navigation assistance or purchase recommendations would seem impossible to implement without those tools.</span></p>
<p><span style="font-weight: 400;">In other cases, several voice interactions would be required to reach the same result as a single visual output. For example, here is my best shot at transforming the output of a appointment scheduling chatbot for a voicebot: </span></p>
<p><span style="font-weight: 400;"><img decoding="async" class="aligncenter wp-image-9488 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/rdv-c-en.png" alt="" width="358" height="522" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/rdv-c-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/rdv-c-en-206x300.png 206w" sizes="(max-width: 358px) 100vw, 358px" /></span></p>
<p><img decoding="async" class="aligncenter wp-image-9490 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/rdv-v-en.png" alt="" width="358" height="393" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/rdv-v-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/rdv-v-en-273x300.png 273w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Trail of previous interactions</span></h3>
<p><span style="font-weight: 400;">What does a chatbot do if the user is not paying attention, has poor memory, or has forgotten to put on their glasses? Nothing! The output remains there for the user to re-read as they see fit, which makes certain cases that are absolutely necessary in a verbal interaction become completely useless in a written conversation:</span></p>
<p><img decoding="async" class="aligncenter wp-image-9492 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/repeter-en.png" alt="" width="358" height="397" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/repeter-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/repeter-en-271x300.png 271w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Persona and voice features</span></h3>
<p><span style="font-weight: 400;">The persona (demographics, language level, personality) of the virtual agent, as well as its consistency, are important in both modes. While in text mode you have to think about the visual output of the chatbot, in voice mode, you have to look for a voice that represents the desired characteristics, while being natural, and this can limit our options. For example, trying to create an informal voice agent can be near impossible, especially when using TTS instead of a recorded voice (which also has its limitations).</span></p>
<audio class="wp-audio-shortcode" id="audio-9487-1" preload="none" style="width: 100%;" controls="controls"><source type="audio/wav" src="https://www.nuecho.com/wp-content/uploads/2022/09/voicebot_cool-en.wav?_=1" /><a href="https://www.nuecho.com/wp-content/uploads/2022/09/voicebot_cool-en.wav">https://www.nuecho.com/wp-content/uploads/2022/09/voicebot_cool-en.wav</a></audio>
<p>&nbsp;</p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Support of multiple channels</span></h3>
<p><span style="font-weight: 400;">Finally, even if our use cases are channel-agnostic, our personna very simple and our agent very talkative, it is clear that we must at least be able to play different messages depending on the channel so that SSML is included in audio messages. Unfortunately, some dialog engines hardly support multiple channels and this can greatly increase the challenges of implementing a common agent for both voice and text.</span></p>
<p><img decoding="async" class="aligncenter wp-image-9496 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/ssml-en.png" alt="" width="358" height="364" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/ssml-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/ssml-en-295x300.png 295w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">Input interpretation</span></h2>
<p><span style="font-weight: 400;">“What about the other way around? The user won’t send images or carousels of images to the chatbot. For sure, interpreting the input can’t be that different.” I will answer with a dramatic example. Let’s look at Bob who is trying to express what he needs to a vocal agent:</span></p>
<p><img decoding="async" class="aligncenter wp-image-9498 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/bob-en.png" alt="" width="677" height="764" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/bob-en.png 677w, https://www.nuecho.com/wp-content/uploads/2022/09/bob-en-480x542.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 677px, 100vw" /></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">Of course, Bob and his legendary bad luck are not real, but the cases I have presented are taken from real-life examples. Even though some STT models can now ignore “mhms”, noises and secondary voices, the transcription will still have its share of errors.</span></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Uncertainty</span></h3>
<p><span style="font-weight: 400;">There are ways to reduce these errors or their impacts, whether it’s through the configuration of the engine, systematic changes of the transcription, or the adaptation of the NLP model to the sentences received. There remains, however, an additional uncertainty related to the STT which must be taken into account in the development of a voice application.</span></p>
<p>&nbsp;</p>
<h4><span style="font-weight: 400;">Strategies for dealing with uncertainty</span></h4>
<p><span style="font-weight: 400;">To increase our confidence in the interpretation of the input, we will use more strategies for dealing with uncertainty in the dialogue of a vocal agent than in the dialogue of a textual agent. </span></p>
<p><span style="font-weight: 400;">For example, we can think of:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Add a step to explicitly or implicity confirm an intent or an entity</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Add a step to disambiguate the input when intentions are too similar</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Enable changes or fixes</span></li>
</ul>
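<p><span style="font-weight: 400;">Here is a minimal sketch of such a policy, with illustrative confidence thresholds; the Interpretation structure is a hypothetical stand-in for whatever your NLU engine returns:</span></p>
<pre><code># Minimal sketch: choose a dialog move from NLU confidences.
# Thresholds and the Interpretation structure are illustrative.
from dataclasses import dataclass

@dataclass
class Interpretation:
    intent: str
    confidence: float

def confirmation_strategy(candidates: list) -> str:
    """candidates: Interpretations sorted by decreasing confidence."""
    top, runner_up = candidates[0], candidates[1]
    if top.confidence >= 0.90:
        return "implicit_confirm"   # restate the intent while moving on
    if runner_up.confidence >= top.confidence - 0.10:
        return "disambiguate"       # intents too similar: offer a choice
    if top.confidence >= 0.60:
        return "explicit_confirm"   # ask a yes/no confirmation question
    return "reprompt"               # too uncertain: ask the user to rephrase
</code></pre>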
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter wp-image-9501 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en.png" alt="" width="358" height="393" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en-273x300.png 273w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h4><span style="font-weight: 400;">Choosing use cases</span></h4>
<p><span style="font-weight: 400;">Addresses, emails or people’s names are difficult pieces of information to transcribe correctly for many reasons, but they present lesser challenges in writing. If some of these pieces of information are critical for a use case, it could be very complex, risky, or inappropriate for the user experience to implement it though a vocal agent..</span></p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter wp-image-9503 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/09/courriel-en.png" alt="" width="358" height="358" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/courriel-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/courriel-en-300x300.png 300w, https://www.nuecho.com/wp-content/uploads/2022/09/courriel-en-150x150.png 150w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">Real-time management</span></h2>
<p><span style="font-weight: 400;">The last big difference between voice and text conversations is time management. A text conversation is asynchronous: the input is received in one block, and the response that follows is sent in one block. The audio, on the other hand, is transmitted continuously, so the time must be managed accordingly.</span></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Short response time and user experience</span></h3>
<p><span style="font-weight: 400;">In a vocal conversation, we are expecting a response within a few tenths of a second, while in text mode, it is completely normal to wait for much longer. Long silences on the phone are uncomfortable, and even if it is possible to play sounds or music-on-hold, between two interactions, the “&#8230;” hint cannot be replaced. It is therefore much more critical to ensure that the system is fast and to warn the user in case of a longer operation in voice mode.</span></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Interruptions</span></h3>
<p><span style="font-weight: 400;">Because voice output has a duration, the user can try to interrupt a voice agent. Supporting interruption correctly involves additional technical complexity, but also has additional impact on the dialogue. For example, we want to make the assumption that if the user says “yes” when presenting several options, this means that he chooses the first one, and we will support this case.</span></p>
<p>&nbsp;</p>
<p><img decoding="async" class="wp-image-9505 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en-1.png" alt="" width="358" height="393" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en-1.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/confirm-en-1-273x300.png 273w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">User Silence</span></h3>
<p><span style="font-weight: 400;">Although a virtual agent isn’t discomforted by silences, the treatment of what is commonly called a </span><i><span style="font-weight: 400;">no-input</span></i><span style="font-weight: 400;"> differs greatly depending on the mode of communication. In a voice conversation, a few seconds of silence usually means the user is hesitating or their voice is too low; an appropriate help message will therefore be played.</span></p>
<p><span style="font-weight: 400;">In text mode, it is useless to harass the user with error messages because the absence of input is treated like any inaction on a website: after a determined time, the user will be disconnected if necessary, and the conversation is ended.</span></p>
<p>&nbsp;</p>
<p><img decoding="async" class="size-full wp-image-9507 aligncenter" src="https://www.nuecho.com/wp-content/uploads/2022/09/no-input-en.png" alt="" width="358" height="377" srcset="https://www.nuecho.com/wp-content/uploads/2022/09/no-input-en.png 358w, https://www.nuecho.com/wp-content/uploads/2022/09/no-input-en-285x300.png 285w" sizes="(max-width: 358px) 100vw, 358px" /></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">So, finally…</span></h2>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">How then does one answer the question: “What can be reused from a voice agent to create a chatbot or vice versa?” The answer is very nuanced and a little disappointing. Switching from a voice agent to a chatbot will generally allow more reuse because the former is generally more restrictive: perhaps it will be enough to adapt the messages a little, to add or remove a few dialogue paths.</span></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">However, in both cases, it is important to take a step back and re-evaluate our use cases and our persona: are they appropriate, feasible and realistic on this new channel? For what comes out of this questioning, business rules and high-level flows of the dialogue can probably be reused. The NLU model (textual data, organization of intentions and entities) and the messages of one may serve as a basis for the other, but will be subject to change. Indeed, the approach will have to be adapted to the results of user tests and data collection, so that the user experience does not suffer in favor of the simplicity of development.</span></p>
<p>&nbsp;</p>
<p>&nbsp;</p><p>The post <a href="https://www.nuecho.com/voice-agents-vs-chatbots-where-does-the-difference-lie/">Voice agents VS. Chatbots: Where does the difference lie?</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		<enclosure url="https://www.nuecho.com/wp-content/uploads/2022/09/voicebot_cool-en.wav" length="0" type="audio/wav" />

			</item>
		<item>
		<title>Exploring Approaches for a Question Answering System</title>
		<link>https://www.nuecho.com/exploring-approaches-for-a-question-answering-system/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=exploring-approaches-for-a-question-answering-system</link>
		
		<dc:creator><![CDATA[Laurence Dupont]]></dc:creator>
		<pubDate>Tue, 07 Jun 2022 16:16:47 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://www.nuecho.com/?p=9458</guid>

					<description><![CDATA[<p>Problem Definition The scientific literature presents several ways to approach the problem, but we were more specifically interested in the answer selection task. This task aims to predict the correct answer among a set of candidate answers. It assumes that there is always a correct answer for each question. However, in a real Q&#38;A system, [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/exploring-approaches-for-a-question-answering-system/">Exploring Approaches for a Question Answering System</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
<p>The post <a href="https://www.nuecho.com/exploring-approaches-for-a-question-answering-system/">Exploring Approaches for a Question Answering System</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2><span style="font-weight: 400;">Problem Definition</span></h2>
<p><span style="font-weight: 400;">The scientific literature presents several ways to approach the problem, but we were more specifically interested in the answer selection task. This task aims to predict the correct answer among a set of candidate answers.</span></p>
<p><img decoding="async" class="size-full wp-image-9463 aligncenter" src="https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Short-Version-1.png" alt="" width="645" height="131" srcset="https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Short-Version-1.png 645w, https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Short-Version-1-480x97.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 645px, 100vw" /><br />
<span style="font-weight: 400;">It assumes that there is always a correct answer for each question. However, in a real Q&amp;A system, sometimes we do not want to provide an answer, for example if a user asks an out-of-domain question. The answer triggering task offers this possibility.</span></p>
<p><img decoding="async" class="size-full wp-image-9465 aligncenter" src="https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Short-Version-2.png" alt="" width="601" height="174" srcset="https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Short-Version-2.png 601w, https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Short-Version-2-480x139.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 601px, 100vw" /></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">System Definition</span></h2>
<p><span style="font-weight: 400;">To accomplish the answer triggering task for a given question, the chosen implementation performs two subtasks:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">A machine learning model (a classifier) ​​accepts as input a vector representation of the question and returns the probabilities by class. Each class is associated with a question-answer pair.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The highest probability is compared to a threshold to determine whether the answer will be returned or not.</span></li>
</ol>
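<p><span style="font-weight: 400;">Here is a minimal sketch of these two subtasks, assuming a trained classifier with a scikit-learn-style predict_proba and an illustrative rejection threshold:</span></p>
<pre><code>import numpy as np

# Minimal sketch of the two subtasks above. `classifier` is any trained
# model exposing a scikit-learn-style predict_proba; `answers[i]` is the
# answer for class i; the threshold value is illustrative.
def answer_or_reject(question_vector, classifier, answers, threshold=0.7):
    probs = classifier.predict_proba(question_vector.reshape(1, -1))[0]  # subtask 1
    best = int(np.argmax(probs))
    if probs[best] >= threshold:                                         # subtask 2
        return answers[best]
    return None  # below threshold: treat as out of domain, do not answer
</code></pre>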
<p><img decoding="async" class="size-full wp-image-9467 aligncenter" src="https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Short-Version-3.png" alt="" width="733" height="223" srcset="https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Short-Version-3.png 733w, https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Short-Version-3-480x146.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 733px, 100vw" /><br />
<span style="font-weight: 400;">Experiments were then carried out to find the best combination of vectorization model and classifier to perform the first subtask.</span></p>
<h2><span style="font-weight: 400;">Experiments</span></h2>
<p><span style="font-weight: 400;">For the experiments, the banking dataset </span><a href="https://github.com/PolyAI-LDN/task-specific-datasets"><span style="font-weight: 400;">BANKING77</span></a><span style="font-weight: 400;">, created by the conversational solutions company PolyAI, was used. The vectorization model and classifier combinations were evaluated on the test set with the accuracy metric, which calculates the percentage of correct predictions.</span></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Vectorization Models</span></h3>
<p><span style="font-weight: 400;">Among the different vectorization models, the one that performed the best is Google&#8217;s Universal Sentence Encoder (USE). It is a neural network pretrained simultaneously on several semantic tasks which accepts a text as input and outputs a sentence embedding (a vector representation of the sentence). The pretraining is done on large text corpora like Wikipedia, which allows it to capture the semantic similarity of sentences never seen before, as shown in the example below.</span></p>
<p><img decoding="async" class="alignnone size-full wp-image-9461 aligncenter" src="https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Semantic-similarity.png" alt="" width="509" height="401" srcset="https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Semantic-similarity.png 509w, https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-Semantic-similarity-480x378.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 509px, 100vw" /></p>
<p style="text-align: center;"><i><span style="font-weight: 400;">Semantic similarity of sentences taken from BANKING77 with USE. </span></i><a href="https://www.tensorflow.org/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder"><i><span style="font-weight: 400;">Reference</span></i></a></p>
<p>&nbsp;</p>
<h3><span style="font-weight: 400;">Classifiers</span></h3>
<p><span style="font-weight: 400;">The evaluated classifiers include a k-nearest neighbor classifier (KNN) and a neural network. The main advantage of the KNN over the neural network is that it does not need to be trained, it just memorizes the training data. Adding new questions to the model therefore does not require retraining. Another advantage is that its predictions are interpretable. To predict the class of a test example, the KNN finds its k-nearest neighbors and returns the majority class. The number of neighbors as well as the distance function are configurable hyperparameters.</span></p>
<p><span style="font-weight: 400;">To illustrate how a KNN works, a simplified example is provided below for a binary classification problem with 2-dimensional data.</span></p>
<p><img decoding="async" class="size-full wp-image-9459 aligncenter" src="https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-graphs.png" alt="" width="582" height="227" srcset="https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-graphs.png 582w, https://www.nuecho.com/wp-content/uploads/2022/06/Blog-EN-Exploring-Approaches-for-a-Question-Answering-System-graphs-480x187.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 582px, 100vw" /></p>
<p style="text-align: center;"><i><span style="font-weight: 400;">Example for a KNN with k=3 and Euclidean distance. For the test example in gray, we will<br />
predict the class in blue (majority class among the 3 nearest neighbors).</span></i></p>
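<p><span style="font-weight: 400;">A minimal sketch of such a classifier with scikit-learn, where random vectors stand in for real sentence embeddings:</span></p>
<pre><code>import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Minimal sketch: a KNN over sentence embeddings. Random vectors stand in
# for real embeddings; k=3 as in the figure, with cosine distance.
rng = np.random.default_rng(0)
train_vectors = rng.normal(size=(100, 512))  # e.g. USE embeddings
train_labels = rng.integers(0, 5, size=100)  # 5 question-answer classes

knn = KNeighborsClassifier(n_neighbors=3, metric="cosine")
knn.fit(train_vectors, train_labels)         # just memorizes the training data
probs = knn.predict_proba(rng.normal(size=(1, 512)))  # per-class probabilities
</code></pre>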
<p>&nbsp;</p>
<p><span style="font-weight: 400;">First, experiments were performed with a KNN with </span><a href="https://en.wikipedia.org/wiki/Cosine_similarity"><span style="font-weight: 400;">cosine distance</span></a><span style="font-weight: 400;">. Afterwards, other experiments were carried out with a learned distance function (</span><a href="http://contrib.scikit-learn.org/metric-learn/introduction.html"><span style="font-weight: 400;">metric learning</span></a><span style="font-weight: 400;">) in order to improve performance. The purpose of the algorithm was to bring together the examples of the same class and to distance the examples belonging to different classes. In both cases, the accuracy obtained with the neural network was superior, which led to the latter being chosen as the classifier.</span></p>
<h2><span style="font-weight: 400;">System Evaluation</span></h2>
<p><span style="font-weight: 400;">The experiments described above have established that the best model was the one combining the Universal Sentence Encoder with an MLP. To evaluate this model, a comparison was performed on the answer triggering task with the NLU intention classification models of Dialogflow ES and Rasa. To do this, a new “out_of_scope” intent containing </span><a href="https://github.com/ycemsubakan/covid_chatbot_data"><span style="font-weight: 400;">questions about COVID-19</span></a><span style="font-weight: 400;"> has been added to the BANKING77 test set. For all models, rejecting out-of-domain examples proved more difficult than properly classifying in-domain examples. However, overall, it was the USE model combined with a neural network that stood out. This evaluation has therefore demonstrated that this model can be used to develop an effective and efficient Q&amp;A system.</span></p>
<p><span style="font-weight: 400;">For more details on the models used, the methodology followed and the results of the experiments, we invite you to consult <a href="https://www.nuecho.com/wp-content/uploads/2022/06/White-Paper-EN-Exploring-Approaches-for-a-Question-Answering-System.pdf" target="_blank" rel="noopener">this article</a></span><span style="font-weight: 400;">.</span></p><p>The post <a href="https://www.nuecho.com/exploring-approaches-for-a-question-answering-system/">Exploring Approaches for a Question Answering System</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p><p>The post <a href="https://www.nuecho.com/exploring-approaches-for-a-question-answering-system/">Exploring Approaches for a Question Answering System</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>There is a new IVR in town. Here’s what it means</title>
		<link>https://www.nuecho.com/there-is-a-new-ivr-in-town-heres-what-it-means/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=there-is-a-new-ivr-in-town-heres-what-it-means</link>
		
		<dc:creator><![CDATA[Yves Normandin]]></dc:creator>
		<pubDate>Wed, 05 May 2021 14:07:25 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=8454</guid>

					<description><![CDATA[<p>And that’s nothing new, really. We used to call that “speech recognition IVR” and we&#8217;ve been delivering these conversational experiences for 20 years. What is new is that there are now novel technologies and platforms that promise to make it much faster and easier to create these conversational experiences while greatly expanding the range of [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/there-is-a-new-ivr-in-town-heres-what-it-means/">There is a new IVR in town. Here’s what it means</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
<p>The post <a href="https://www.nuecho.com/there-is-a-new-ivr-in-town-heres-what-it-means/">There is a new IVR in town. Here’s what it means</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">And that’s nothing new, really. We used to call that “speech recognition IVR” and we&#8217;ve been delivering these conversational experiences for 20 years.</span></p>
<p><span style="font-weight: 400;">What </span><i><span style="font-weight: 400;">is</span></i><span style="font-weight: 400;"> new is that there are now novel technologies and platforms that promise to make it much faster and easier to create these conversational experiences while greatly expanding the range of tasks that virtual voice agents (that’s what we call them) can handle.</span></p>
<p><span style="font-weight: 400;">These novel technologies initially emerged in the context of voice assistants (Siri, Amazon Echo, Google Home) and are in the process of fundamentally changing the way IVR solutions are developed.</span></p>
<p><span style="font-weight: 400;">To understand how, let’s compare the “Traditional speech recognition IVR” with the “New IVR”.</span></p>
<table class="MsoNormalTable" style="border-collapse: collapse; border: none; mso-border-alt: solid black 1.0pt; mso-yfti-tbllook: 1184; mso-border-insideh: 1.0pt solid black; mso-border-insidev: 1.0pt solid black;" border="1" cellspacing="0" cellpadding="0">
<tbody>
<tr style="mso-yfti-irow: 0; mso-yfti-firstrow: yes;">
<td style="border: solid black 1.0pt; background: #EFEFEF; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><b><br />
<span style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-fareast-language: FR-CA;">Technology</span></b></p>
</td>
<td style="border: solid black 1.0pt; border-left: none; mso-border-left-alt: solid black 1.0pt; background: #EFEFEF; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><b><br />
<span style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-fareast-language: FR-CA;">Traditional speech IVR</span></b></p>
</td>
<td style="border: solid black 1.0pt; border-left: none; mso-border-left-alt: solid black 1.0pt; background: #EFEFEF; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><b><br />
<span style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-fareast-language: FR-CA;">New IVR</span></b></p>
</td>
</tr>
<tr style="mso-yfti-irow: 1;">
<td style="border: solid black 1.0pt; border-top: none; mso-border-top-alt: solid black 1.0pt; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><span style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-fareast-language: FR-CA;">Speech recognition</span></p>
</td>
<td style="border-top: none; border-left: none; border-bottom: solid black 1.0pt; border-right: solid black 1.0pt; mso-border-top-alt: solid black 1.0pt; mso-border-left-alt: solid black 1.0pt; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><span lang="EN-CA" style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-ansi-language: EN-CA; mso-fareast-language: FR-CA;">Grammars and statistical language models</span></p>
</td>
<td style="border-top: none; border-left: none; border-bottom: solid black 1.0pt; border-right: solid black 1.0pt; mso-border-top-alt: solid black 1.0pt; mso-border-left-alt: solid black 1.0pt; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><span style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-fareast-language: FR-CA;">Speech-to-text</span></p>
</td>
</tr>
<tr style="mso-yfti-irow: 2;">
<td style="border: solid black 1.0pt; border-top: none; mso-border-top-alt: solid black 1.0pt; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><span style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-fareast-language: FR-CA;">Natural language understanding (NLU)</span></p>
</td>
<td style="border-top: none; border-left: none; border-bottom: solid black 1.0pt; border-right: solid black 1.0pt; mso-border-top-alt: solid black 1.0pt; mso-border-left-alt: solid black 1.0pt; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><span style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-fareast-language: FR-CA;">Grammars and simple classifiers</span></p>
</td>
<td style="border-top: none; border-left: none; border-bottom: solid black 1.0pt; border-right: solid black 1.0pt; mso-border-top-alt: solid black 1.0pt; mso-border-left-alt: solid black 1.0pt; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><span style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-fareast-language: FR-CA;">Deep learning NLP</span></p>
</td>
</tr>
<tr style="mso-yfti-irow: 3; mso-yfti-lastrow: yes;">
<td style="border: solid black 1.0pt; border-top: none; mso-border-top-alt: solid black 1.0pt; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><span style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-fareast-language: FR-CA;">Speech generation</span></p>
</td>
<td style="border-top: none; border-left: none; border-bottom: solid black 1.0pt; border-right: solid black 1.0pt; mso-border-top-alt: solid black 1.0pt; mso-border-left-alt: solid black 1.0pt; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><span lang="EN-CA" style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-ansi-language: EN-CA; mso-fareast-language: FR-CA;">Prompt concatenation + some text-to-speech</span></p>
</td>
<td style="border-top: none; border-left: none; border-bottom: solid black 1.0pt; border-right: solid black 1.0pt; mso-border-top-alt: solid black 1.0pt; mso-border-left-alt: solid black 1.0pt; padding: 5.0pt 5.0pt 5.0pt 5.0pt;" valign="top">
<p class="MsoNormal" style="margin-bottom: 6.0pt; line-height: normal;"><span style="font-family: 'Arial',sans-serif; mso-fareast-font-family: 'Times New Roman'; color: black; mso-fareast-language: FR-CA;">Mostly text-to-speech</span></p>
</td>
</tr>
</tbody>
</table>
<p><span style="font-weight: 400;">Let’s review the above in greater detail.</span></p>
<h2><span style="font-weight: 400;">Traditional IVR speech recognition</span></h2>
<p><span style="font-weight: 400;">The speech recognition engines traditionally used in speech IVR (e.g., </span><a href="https://www.nuance.com/omni-channel-customer-engagement/voice-and-ivr/automatic-speech-recognition/nuance-recognizer.html" target="_blank" rel="noopener"><span style="font-weight: 400;">Nuance Recognizer</span></a><span style="font-weight: 400;">) can’t recognize speech “out-of-the-box”. To recognize speech, they need a speech recognition grammar. There are two main types of grammars:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><b>SRGS grammars</b><span style="font-weight: 400;"> are </span><a href="https://www.w3.org/TR/speech-grammar/" target="_blank" rel="noopener"><span style="font-weight: 400;">defined by a set of rules</span></a><span style="font-weight: 400;">, hand-crafted by a grammar developer, which provide a formal definition of the language that can be recognized by the engine. The language defined by SRGS grammars is rigid and only the sentences included in that language can be recognized by the engine. This makes them well suited for directed dialogues, which tend to have a predictable range of user utterances.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Statistical language models (SLMs)</b><span style="font-weight: 400;"> are defined by </span><a href="https://en.wikipedia.org/wiki/N-gram" target="_blank" rel="noopener"><span style="font-weight: 400;">N-grams</span></a><span style="font-weight: 400;">, which are probabilities of a word given the previous words in the sentence and these probabilities are learned from sample sentences. SLMs provide a much less rigid language model than SRGS grammars so they are better suited for handling spontaneous natural language responses to open-ended prompts (e.g., “How may I help you?”). To perform well, SLMs need a sufficiently large and representative corpus of sentences with which to train the model.</span></li>
</ol>
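<p><span style="font-weight: 400;">To make the SLM idea concrete, here is a minimal toy sketch of how bigram probabilities can be estimated from sample sentences. This is our own illustration, not the code of any speech recognition engine; real SLMs add smoothing, higher-order N-grams, and vastly larger corpora.</span></p>
<pre>
from collections import Counter

# Toy bigram language model: estimate P(word | previous word)
# from a handful of sample sentences.
samples = [
    "i want to check my balance",
    "i want to pay my bill",
    "check my balance please",
]

bigram_counts = Counter()
context_counts = Counter()
for sentence in samples:
    words = ["BOS"] + sentence.split()  # BOS marks the sentence start
    for prev, word in zip(words, words[1:]):
        bigram_counts[(prev, word)] += 1
        context_counts[prev] += 1

def bigram_prob(prev, word):
    """Unsmoothed maximum-likelihood estimate of P(word | prev)."""
    if context_counts[prev] == 0:
        return 0.0
    return bigram_counts[(prev, word)] / context_counts[prev]

print(bigram_prob("my", "balance"))  # 2/3: "my" is followed by "balance" in 2 of 3 cases
</pre>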
<p><span style="font-weight: 400;">Developing a traditional speech IVR application typically requires creating a separate grammar for each step in the dialogue. Moreover, to achieve sufficient recognition accuracy, these grammars need to be extensively tuned based on real user utterances collected by the application in production.</span></p>
<p><span style="font-weight: 400;">Developing and tuning these grammars are time consuming tasks that require highly skilled speech scientists. If done well, this can produce very high accuracy and good user experiences. Unfortunately, the tendency to cut corners there is very high, which inevitably results in poor performance and user experience. This is one of the main reasons why speech recognition IVR tends to have such a bad reputation.</span></p>
<h2><span style="font-weight: 400;">Speech-to-text (STT)</span></h2>
<p><span style="font-weight: 400;">In the past several years, we have witnessed breakthrough improvements in speech recognition technology thanks to deep learning. This has made it possible to train speech-to-text (STT) engines that produce high accuracy speech transcription with almost unlimited vocabularies. Nowadays, STT engines are available from a wide range of vendors (e.g., </span><a href="https://cloud.google.com/speech-to-text" target="_blank" rel="noopener"><span style="font-weight: 400;">Google STT</span></a><span style="font-weight: 400;">, </span><a href="https://docs.mix.nuance.com/asr-grpc/v1/#asr-as-a-service-grpc-api" target="_blank" rel="noopener"><span style="font-weight: 400;">Nuance Krypton</span></a><span style="font-weight: 400;">, </span><a href="https://aws.amazon.com/transcribe/" target="_blank" rel="noopener"><span style="font-weight: 400;">Amazon Transcribe</span></a><span style="font-weight: 400;">, </span><a href="https://deepgram.com/" target="_blank" rel="noopener"><span style="font-weight: 400;">Deepgram</span></a><span style="font-weight: 400;">, etc.) and there are even </span><a href="https://fosspost.org/open-source-speech-recognition/" target="_blank" rel="noopener"><span style="font-weight: 400;">open-source versions available</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">With STT engines, there is no need to develop grammars at all, so this is a huge time saver when creating conversational IVR applications. This is not to say that speech recognition is a solved problem, far from it. Accuracy remains very much an issue. In fact, we can often achieve significantly better accuracy with well-tuned grammars than with even the best STT engines.</span></p>
<p><span style="font-weight: 400;">At the moment, the main issues with STT engines are:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><b>Training data</b><span style="font-weight: 400;">. As with any model based on machine learning, the STT model’s performance will be best if its training data is representative of conditions in which it is used. So for instance, if a model was mostly trained on recordings from home speakers primarily involving topics like music playing, weather information, alarms setting and general trivia questions, it may not be optimal for a banking IVR application. Having the ability to fine-tune a STT model on domain-specific data could clearly make a huge difference in accuracy. Unfortunately, most commercial STT vendors don’t make that possible (one notable exception being Deepgram). Nuance does provide a partial solution by making it possible to train a Domain Language Model (DLM) on phrases specific to the target domain. </span></li>
<li style="font-weight: 400;" aria-level="1"><b>Contextualization</b><span style="font-weight: 400;">. STT engines can conceptually recognize any user utterance, whether it’s about movies, politics, weather, music, or whatever. That’s very powerful but that’s also a liability in conversational applications, which are usually both domain-specific and highly contextual. If the virtual agent asks a user for a birthdate, then there’s a fairly good chance that the user will respond with a birthdate. The ability to take advantage of such contextual knowledge can greatly improve speech recognition accuracy. Humans do this all the time without even realizing it. Some STT engines do provide some contextualization capabilities (e.g., </span><a href="https://cloud.google.com/speech-to-text/docs/adaptation-model" target="_blank" rel="noopener"><span style="font-weight: 400;">Google STT model adaptation</span></a><span style="font-weight: 400;">), but these remain quite limited at the moment.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Optimization</b><span style="font-weight: 400;">. Traditional IVR speech recognition engines provide several effective ways to optimize accuracy. For example, big accuracy gains can be achieved by fine-tuning phonetic transcriptions, modeling intra and inter-word coarticulation, modeling disfluencies, tuning grammar weights, post-processing N-best results, etc. Most STT engines provide few, if any means to optimize accuracy.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Multilingual support</b>. Nu Echo is based in bilingual Montreal and most conversational applications we deploy need to support English words in French sentences and vice-versa (address recognition is a very good example). That can only be done effectively with a speech recognition engine capable of supporting two languages in the same utterance, a feature available in some traditional IVR speech recognition engines, but in no STT engine we know of.</li>
</ol>
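<p><span style="font-weight: 400;">As a concrete illustration of the contextualization point above, here is a minimal sketch of phrase biasing with the Google Cloud Speech-to-Text Python client. The phrase list, boost value, and audio settings are our own hypothetical choices, not recommendations for any particular application.</span></p>
<pre>
# Minimal sketch: biasing Google STT toward in-context phrases
# ("speech adaptation"). Phrases, boost, and audio settings are
# illustrative only.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=8000,  # typical telephony audio
    language_code="en-US",
    speech_contexts=[
        speech.SpeechContext(
            # Phrases we expect at this dialogue step
            phrases=["account balance", "routing number", "wire transfer"],
            boost=15.0,  # bias recognition toward these phrases
        )
    ],
)

with open("caller_utterance.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

response = client.recognize(config=config, audio=audio)
for result in response.results:
    best = result.alternatives[0]
    print(best.transcript, best.confidence)
</pre>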
<p><span style="font-weight: 400;">STT technologies are evolving extremely rapidly so we can expect continuously improving accuracy, more effective contextualization and optimization tools, as well as better access to domain-optimized models. In the meantime, the optimal solution may very well be a combination of STT and traditional IVR engines.</span></p>
<h2><span style="font-weight: 400;">Natural language understanding (NLU)</span></h2>
<p><span style="font-weight: 400;">Early speech IVR applications relied exclusively on SRGS grammars for speech recognition, so NLU was not an issue since NLU is built into the grammar.</span></p>
<p><span style="font-weight: 400;">The use of statistical language models (SLMs) created the need for a separate NLU engine, capable of understanding free-form speech recognition results. Intent detection techniques based on simple machine learning techniques were </span><a href="http://www.aclweb.org/anthology/J99-3003.pdf" target="_blank" rel="noopener"><span style="font-weight: 400;">introduced more than 20 years ago</span></a><span style="font-weight: 400;"> for the purpose of natural language call routing. These techniques have worked quite well, but they typically require a large number of sample sentences per intent to adequately train the model, which is often a big obstacle to get a system up and running.</span></p>
<p><span style="font-weight: 400;">For a very long time, these techniques didn&#8217;t evolve much. Then, deep learning totally changed the landscape for natural language processing technologies. A first impact has been the introduction of word embeddings, which improve generalizability and make it possible to greatly reduce the number of sample sentences required to train NLU models. More recently, large language models (e.g., BERT) and new neural network architectures are providing further improvements.</span></p>
<p><span style="font-weight: 400;">Note that, although the same NLU technologies are used for both text and voice conversations, there are important differences. For instance, while text conversational systems must to be able to robustly deal with typos, initialisms (eg, “lol”), emoticons, etc., voice conversational systems have to deal with homophone spelling differences (e.g., “coming” vs “cumming”, “forestcrest” vs “forest crest”, or “our our 9” vs “rr9”), undesired normalizations by the STT engine (e.g., “rr1 third concession” → “rr⅓ concession”) and, of course, speech recognition errors.</span></p>
<p><span style="font-weight: 400;">Some issues with NLU engines include:</span></p>
<ol style="margin-left: 80;">
<li style="font-weight: 400;" aria-level="1"><b>Contextualization</b><span style="font-weight: 400;">. Most NLU engines are not contextual (one exception being </span><a href="https://cloud.google.com/dialogflow/cx/docs" target="_blank" rel="noopener"><span style="font-weight: 400;">Dialogflow</span></a><span style="font-weight: 400;">), which can be a problem since the same utterance can have different interpretations depending on the context. For instance, the meaning of “Montreal” is different depending whether the question was “what’s your destination?” or “what’s the departure city?”</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Confidence scoring</b><span style="font-weight: 400;">. Effective repair dialogue requires dependable confidence scores and, unfortunately, NLU confidence scores tend not to be very good. Moreover, NLU scores usually don’t take the speech recognition confidence score into account, which is a big problem since how can we be confident in a NLU result if it’s based on a low confidence speech recognition result? In voice conversational application, effective confidence scores need to take both the STT and the NLU scores into account.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>N-best results</b><span style="font-weight: 400;">. Many NLU engines only return the best scoring intent, even when several intents have almost identical scores. Having access to N-best results makes it possible to make better dialogue decisions (e.g., disambiguation) or to choose the best hypothesis based on contextual information not available to the NLU engine.</span></li>
</ol>
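<p><span style="font-weight: 400;">As an illustration of the confidence scoring point above, here is one simple way to combine the two scores. The geometric mean and the thresholds are arbitrary choices of ours, shown only to make the idea concrete.</span></p>
<pre>
# One simple illustration of combining STT and NLU confidence scores
# before deciding whether to accept, confirm, or re-prompt. The
# geometric mean and the thresholds are arbitrary choices.
import math

ACCEPT_THRESHOLD = 0.75
CONFIRM_THRESHOLD = 0.45

def combined_confidence(stt_score, nlu_score):
    # Geometric mean: a low score on either side drags the result down,
    # so a "confident" NLU result on a doubtful transcript is not trusted.
    return math.sqrt(stt_score * nlu_score)

def repair_decision(stt_score, nlu_score):
    score = combined_confidence(stt_score, nlu_score)
    if score >= ACCEPT_THRESHOLD:
        return "accept"
    if score >= CONFIRM_THRESHOLD:
        return "confirm"  # e.g., "Did you say ...?"
    return "reprompt"

print(repair_decision(0.95, 0.90))  # accept
print(repair_decision(0.40, 0.92))  # NLU is confident, transcript is not: confirm
</pre>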
<p><span style="font-weight: 400;">Natural language processing is currently one of the most active areas of research in artificial intelligence and we expect to see a continuous stream of technological advances making their way into conversational AI systems.</span></p>
<h2><span style="font-weight: 400;">Speech generation</span></h2>
<p><span style="font-weight: 400;">Text-to-speech (TTS) engines have been around for a very long time, but up until recently, the quality and intelligibility wasn’t nearly good enough to provide a good conversational experience. The best speech IVR applications therefore relied almost exclusively on prompts recorded in the studio by professional voice talents. Speech generation for sentences incorporating dynamic data was done with prompt concatenation, which is quite difficult to do well.</span></p>
<p><span style="font-weight: 400;">But we’ve recently seen such phenomenal progress in TTS technologies that it now makes sense to use TTS instead of studio recordings in most cases. That’s especially true in English, where the quality of the best TTS is such that it’s sometimes difficult to distinguish it from human speech. Moreover, it is now possible to create custom TTS voices that imitate the voice of our favorite voice talent.</span></p>
<p><span style="font-weight: 400;">The use of TTS technology really is a game changer when it comes to creating and evolving conversational IVR applications since it eliminates the need to constantly go back to the studio to record new prompts any time an application change is required and it avoids all the cumbersome, error-prone manipulations of thousands of voice segments (often in multiple languages). Now, applications can be modified, tested, and released to production almost on-the-fly.</span></p>
<p><span style="font-weight: 400;">Of course, TTS is not perfect and we still see the occasional glitches, but generally that seems like a small price to pay for the immense added value it provides. The best solution may very well be a combination of recorded audio for those key prompts where we want to get the exact intonation and emotion we’re looking for, with a custom TTS voice built from the same voice talent used in recorded prompts.</span></p>
<h2><span style="font-weight: 400;">Integration with contact center platforms</span></h2>
<p><span style="font-weight: 400;">Traditional speech IVR applications have for a long time relied on mature and time-tested standards for integrating conversational technologies. This includes </span><a href="https://tools.ietf.org/html/rfc6787" target="_blank" rel="noopener"><span style="font-weight: 400;">MRCP</span></a><span style="font-weight: 400;"> for speech recognition and text-to-speech, </span><a href="https://www.w3.org/TR/voicexml20/" target="_blank" rel="noopener"><span style="font-weight: 400;">VoiceXML</span></a><span style="font-weight: 400;"> for dialogue, </span><a href="https://www.w3.org/TR/speech-grammar/" target="_blank" rel="noopener"><span style="font-weight: 400;">SRGS</span></a><span style="font-weight: 400;"> for speech recognition grammars, and </span><a href="https://www.w3.org/TR/semantic-interpretation/" target="_blank" rel="noopener"><span style="font-weight: 400;">SISR</span></a><span style="font-weight: 400;"> for semantic interpretation.</span></p>
<p><span style="font-weight: 400;">Now, with the emergence of a new generation of Cloud contact center platforms and the arrival of the latest deep learning based technologies, all of these are being thrown out the window and replaced with a variety of proprietary APIs and some emerging standards (e.g., </span><a href="https://grpc.io/" target="_blank" rel="noopener"><span style="font-weight: 400;">gRPC</span></a><span style="font-weight: 400;">).</span></p>
<p><span style="font-weight: 400;">What this means is that the integration of these new conversational technologies with contact center platforms remains very much a work in progress, so we find that:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Support for some basic capabilities that we used to take for granted (e.g., barge-in, DTMF support) is not always where it needs to be</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The choice of available conversational technologies on many CC platforms remains limited</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Even when integrations are available, they often make it very difficult to fully take advantage of the technology’s full potential (e.g., no access to some confidence scores or N-best lists, inability to post-process STT results before sending them to the NLU engine, etc.)</span></li>
</ul>
<p><span style="font-weight: 400;">Some solutions are emerging to fill this integration gap. For instance, Audiocodes’s </span><a href="https://voiceaiconnect.audiocodes.com/" target="_blank" rel="noopener"><span style="font-weight: 400;">VoiceAI Connect</span></a><span style="font-weight: 400;"> claims to provide “easy connectivity between any CC platform and any bot frameworks or speech engine”. This could make it possible to leverage the conversational technologies that best fit the requirements of any given solution.</span></p>
<h2><span style="font-weight: 400;">The best of both worlds</span></h2>
<p><span style="font-weight: 400;">Deep learning is fundamentally impacting conversational AI technologies and this is profoundly changing the way we conceive the development of IVR applications. We are still very early in that transformation. These novel technologies are still fairly immature and are likely to evolve rapidly in the near future and so is our understanding of how to most effectively leverage them.</span></p>
<p><span style="font-weight: 400;">Nonetheless, they are already providing some very concrete and transformative benefits. For instance:</span></p>
<ol>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">It is no longer required to create complex grammars or to collect thousands of SLM training utterances to get speech recognition to work. The best speech-to-text engines provide “good enough” speech recognition accuracy out-of-the-box so it is now possible to have a system up and running quickly.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The latest NLU engines can be trained with probably an order of magnitude fewer sample sentences than with older NLU classification technologies, which also makes it possible to get a first version system up and running very quickly.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">The latest text-to-speech technologies are getting so good that it is almost no longer necessary to use recorded prompts (especially in English). This is really a game changer since it greatly shortens the time required to create and deliver a new version of the application, therefore greatly facilitating and accelerating the deployment of enhancements.</span></li>
</ol>
<p><span style="font-weight: 400;">The ability to quickly get a first application version up and running is key since it makes it possible to quickly start collecting real conversational data and utterances, which are the raw material with which the system can be continuously enhanced and optimized.</span></p>
<p><span style="font-weight: 400;">While some of the limitations of STT technologies are being addressed (e.g., in terms of contextualization, optimization, multilingual support, etc.), conversational IVR application developers should consider mixing STT with traditional IVR speech recognition technologies in order to get the best of both worlds and deliver exceptional conversational IVR user experiences (some IVR platforms, for instance the </span><a href="https://docs.genesys.com/Documentation/GVP" target="_blank" rel="noopener"><span style="font-weight: 400;">Genesys Voice Platform</span></a><span style="font-weight: 400;">, make that possible).</span></p><p>The post <a href="https://www.nuecho.com/there-is-a-new-ivr-in-town-heres-what-it-means/">There is a new IVR in town. Here’s what it means</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p><p>The post <a href="https://www.nuecho.com/there-is-a-new-ivr-in-town-heres-what-it-means/">There is a new IVR in town. Here’s what it means</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Takeaways from the VUX World Live Google Contact Centre AI with Antony Passemard</title>
		<link>https://www.nuecho.com/takeaways-from-the-vux-world-live-google-contact-centre-ai-with-antony-passemard/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=takeaways-from-the-vux-world-live-google-contact-centre-ai-with-antony-passemard</link>
		
		<dc:creator><![CDATA[Yves Normandin]]></dc:creator>
		<pubDate>Fri, 19 Mar 2021 19:16:32 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=8274</guid>

					<description><![CDATA[<p>Dialogflow CX vs. ES The interview started with a comparison between Dialogflow CX and ES. CX is not just an incremental improvement over ES. It is in fact a complete redesign, with a more powerful and more intuitive dialog model. It also has a clean separation between intents and dialogue that greatly increases intent reusability [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/takeaways-from-the-vux-world-live-google-contact-centre-ai-with-antony-passemard/">Takeaways from the VUX World Live Google Contact Centre AI with Antony Passemard</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
<p>The post <a href="https://www.nuecho.com/takeaways-from-the-vux-world-live-google-contact-centre-ai-with-antony-passemard/">Takeaways from the VUX World Live Google Contact Centre AI with Antony Passemard</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>Dialogflow CX vs. ES</h2>
<p>The interview started with a comparison between Dialogflow CX and ES. CX is not just an incremental improvement over ES. It is in fact a complete redesign, with a more powerful and more intuitive dialog model. It also has a clean separation between intents and dialogue that greatly increases intent reusability and dialogue manageability and a visual builder that can be easily used by Conversational Architects to create complex dialogues with less code.</p>
<p>According to Passemard, this had long been requested by many customers. While Dialogflow ES, which Google Cloud will continue to support and improve, is appropriate for simple dialogues, Dialogflow CX should be the platform of choice for longer and more complex dialogues. In addition, Dialogflow CX provides several advantages over ES:</p>
<ul>
<li>More predictable (although not necessarily lower) pricing</li>
<li>Several IVR features (including barge-in, DTMF support, timeouts, and retries) that are necessary to build conversational IVRs</li>
<li>Support for up to 40,000 intents (compared to 2,000 with ES)</li>
<li>More collaboration features that enable teams to work on large projects more efficiently</li>
<li>Better support for analytics, experiments, and feedback loops</li>
<li>A better NLU engine, based on the latest BERT model.</li>
</ul>
<p>Anybody can use Dialogflow today. However, for conversational IVR, integrating Dialogflow with a contact centre platform generally remains a challenge. Most IVR-specific features require a good integration with the IVR platform and depend on events or parameters being provided to Dialogflow, whether to leverage DTMF for use cases other than numerical parameters or to use incremental no-input event handlers.</p>
<p>Passemard mentioned that some solutions, such as Audiocodes, can facilitate this integration. Interestingly, he also mentioned that it is best to stream the audio directly to Dialogflow rather than using Google STT to transcribe the audio and send the transcription to Dialogflow. The reason for this is that Dialogflow has an <a href="https://cloud.google.com/dialogflow/cx/docs/concept/speech-adaptation">Auto Speech Adaptation</a> feature that automatically optimizes the transcription accuracy based on the agent’s training phrases. That said, our own experience shows that we can often achieve as good or better results by streaming the audio directly to Google STT, using <a href="https://cloud.google.com/speech-to-text/docs/speech-adaptation">speech adaptation</a>. Moreover, it is often necessary to post-process transcription results in order to make them compatible with Dialogflow’s NLU, which is not possible when streaming audio directly to Dialogflow.</p>
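<p>To make this concrete, here is a hedged sketch of the “STT first, then Dialogflow” flow: the transcript produced and post-processed upstream is sent to Dialogflow CX as text. The project, agent, and session identifiers are placeholders.</p>
<pre>
# Sketch of sending a post-processed STT transcript to Dialogflow CX
# as text. Project, location, agent, and session IDs are placeholders.
import uuid

from google.cloud import dialogflowcx_v3 as df

client = df.SessionsClient()
session = client.session_path(
    "my-project", "global", "my-agent-id", str(uuid.uuid4())
)

def detect_intent(transcript):
    """Send a (post-processed) transcript to Dialogflow CX."""
    request = df.DetectIntentRequest(
        session=session,
        query_input=df.QueryInput(
            text=df.TextInput(text=transcript),
            language_code="en",
        ),
    )
    response = client.detect_intent(request=request)
    return response.query_result

# The transcript would come from the STT step, after post-processing.
result = detect_intent("i would like to check my balance")
print(result.intent.display_name, result.intent_detection_confidence)
</pre>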
<h2>Agent Assist for Voice</h2>
<p>The next topic covered in the interview was Agent Assist. This is an important topic for at least two reasons. The first is that there are very promising use cases for Agent Assist. The second is that we’ve heard a lot about CCAI Agent Assist in the past couple of years, but it’s been hard to understand exactly how to access this capability. About this last point, Passemard confirmed what we suspected: there is no public API for Agent Assist voice; Google decided to only make it available through CCAI telephony partners. As mentioned by Simms, this could be a smart business strategy. By working aggressively with telephony partners to integrate Agent Assist with their platforms and reselling only through them, Google may ensure that it becomes the de facto choice for Agent Assist.</p>
<p>The downside, however, is that enterprises are entirely dependent on the contact center vendors’ motivation and ability to make CCAI available to their customer base. It might be a while before many enterprises can leverage CCAI and, when that happens, it might require very expensive upgrades to their contact center infrastructure. For this reason, customers may end up looking for the alternative solutions that will inevitably become available.</p>
<p>This brings me to the Agent Assist use cases. Passemard mentioned that proposing relevant documents to agents based on the conversation wasn’t found to be very useful by customers. Agents don’t want to read through full documents to find the answer to a customer’s need. They want extractive search, which can automatically extract the relevant portion of a document. And, we heard, that’s coming soon. What is really taking off at the moment, according to Passemard, is the ability to automatically fill in forms in real time with information provided by the caller. That’s really powerful. And, of course, a side benefit of Agent Assist is getting a transcription of every single call.</p>
<h2>Agent Assist for Chat</h2>
<p>Passemard said that Agent Assist for chat has been shown to provide great improvements in agent productivity, agent satisfaction, and CSAT scores. In particular, the Smart Reply and Smart Compose capabilities are provided using predictive models trained on the customer’s data, which makes them much more accurate. Agent Assist for chat is currently only available from chat vendors, but a public API is coming out soon.</p>
<h2>Insights</h2>
<p>The last CCAI capability mentioned was Insights, which is Google’s name for speech analytics. Insights is still in preview, but the good news is that it will be available to all through a public API. Insights is about understanding the conversations happening in the contact center. Using Insights, enterprises will be able to look at conversations, index them, search through them, do topic modeling and sentiment analysis, navigate within a conversation, and perform NLU-based phrase matching (e.g., “Give me all conversations with a greeting”). Google will support a SIPREC integration.</p>
<h2>Final Notes</h2>
<p>Passemard mentioned that Conversational AI is probably the first application of AI that has a massive impact on customers. That’s an intriguing claim; it would be interesting to see some data that supports this. He also concluded by strongly advising against underestimating the value of a good Conversational Architect. We couldn’t agree more. It’s definitely not something you learn in two weeks. The very good ones have years of experience and they are critical to the success of any conversational project.</p><p>The post <a href="https://www.nuecho.com/takeaways-from-the-vux-world-live-google-contact-centre-ai-with-antony-passemard/">Takeaways from the VUX World Live Google Contact Centre AI with Antony Passemard</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Mandate: Possible &#8211; The conversational application</title>
		<link>https://www.nuecho.com/mandate-converastional-job-project/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=mandate-converastional-job-project</link>
		
		<dc:creator><![CDATA[Guillaume Voisine]]></dc:creator>
		<pubDate>Thu, 08 Oct 2020 14:00:21 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=7069</guid>

					<description><![CDATA[<p>My Zoom meeting is interrupted by the doorbell. It rings four times, following the usual pattern. I know what that means. Time for a new mandate. I apologize to my fellow agents, exit the session and rush to the door. As expected, no one is there, but I notice a small package on the ground. [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/mandate-converastional-job-project/">Mandate: Possible – The conversational application</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
<p>The post <a href="https://www.nuecho.com/mandate-converastional-job-project/">Mandate: Possible &#8211; The conversational application</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>My Zoom meeting is interrupted by the doorbell. It rings four times, following the usual pattern.</p>
<p>I know what that means. Time for a new mandate.</p>
<p>I apologize to my fellow agents, exit the session and rush to the door.</p>
<p>As expected, no one is there, but I notice a small package on the ground. I pick it up and go back inside. The envelope is bare. No stamp, no address, no name, nothing.</p>
<p>I return to the couch, only to find Rusty comfortably installed exactly where I was a moment ago.</p>
<p>“Meow”, he says. Translation: “Now it’s my place, human. Deal with it.”</p>
<p>Fine. I sit beside him and tear the package open, freeing an old-school, handheld tape recorder. A familiar voice fills the air of my living room when I press Play:</p>
<p>“Good afternoon, Mr. Project Manager.”</p>
<p>I chuckle. Still not on a first name basis, after all these years?</p>
<p>“Your mandate, should you choose to accept it, is to deliver a conversational application for The Company. They wish to improve their customer experience (CX) by automating their customer service. The application must be able to answer questions with personalized responses, but also execute actions based on customer requests. It should also provide a way for live agents to take over and quickly resolve problematic conversations.”</p>
<p>This seems interesting. Obviously, I will need more information before I can do any planning, but I can already start to build the perfect team to tackle this project.</p>
<p>“The initial requirements will be forwarded to you shortly. I’m certain you will represent the Secretary to the best of your abilities in the execution of this project.”</p>
<p>Rusty, laying on his side, stretches his legs and closes his eyes. “Don’t worry, my friend. Your mandate is to sleep.” I scratch him between the ears, and the cat purrs his approbation.</p>
<p>“This recorder will self-destruct in ten seconds. Good luck.”</p>
<p>Oh no, not that again! I manage to reach my backyard, aim for the garbage bin and throw the recorder just as it starts to combust. That was close. Last time, I was not so lucky, and my house reeked of burned plastic for days.</p>
<p>Back to the couch. Rusty gives me an indignant look, his tail slowly whipping the air as I open my special binder containing the headshots of all the agents at my disposal.</p>
<p>I flip through the photos, looking for one in particular. I need an excellent communicator, like&#8230; There he is, Agent Business Analyst, to be the bridge between the client and the team. His acute sense of observation will serve him well, as he will have to understand the client’s business needs and rules. He will gather requirements from the client and help them define, specify and prioritize those requirements in order to determine which solution suits them best.</p>
<p>Throughout the project, he will leverage his comprehension of the technology and its potential applications to work with the client and the technical team to ensure that requirements are met and that the solution works as defined and as expected.</p>
<p>The very next photo pictures Agent Solution Architect. Yes, I will require her ability to have a global technical perspective on the project. Her deep knowledge of relevant and state-of-the-art technologies will help her advise the client, as well as the technical team, on the best technological choices to meet requirements and comply with any constraints the client may have. She will ensure that all the different pieces of the solution are considered and well-integrated with each other in a robust and effective whole.</p>
<p>Someone will have to define and design the conversation between the end-user and the system. That person must also be an excellent communicator, capable of interacting with all the stakeholders and members of the technical team. I turn to Rusty, who looks slightly less irritated. “What do you think?” I ask him. He yawns. Thanks for the assist, buddy. You’re perfectly right: Agent Conversational User Experience Designer (quite a mouthful. We call him CUX Designer, for short) is the perfect candidate for that. As the one responsible for the end-user experience and UI design, both for text and voice, his task will be to translate business and functional requirements into specific use cases and dialogue flows, as well as detailed functional design and messages. He will also have to validate these with the client and end-users. It will be his responsibility to ensure that the designs meet the client’s requirements, but also account for technical requirements or limitations, including automatic speech recognition (ASR) and natural language understanding (NLU).</p>
<p>Now, I’ll need to make certain that the application understands what the end-user says and correctly interprets what they mean. After all, for a conversational interface to be successful, it is essential that the user input is well understood and accurately interpreted, both globally and in context. I reach the end of the binder and start again from the beginning. Where is she? Ah, there! Agent NLU Scientist. She will work in close collaboration with Agent CUX Designer, as they represent both sides of the same coin: there has to be perfect cohesion between dialogue and NLU for the conversation to be successful. For voice applications, she will also be responsible for configuring and tuning the ASR.</p>
<p>Once the conversational agent is deployed and used by actual people, Agent NLU Scientist will also continue to play a critical role in tuning and improving its ability to understand what the user means.</p>
<p>A part of the team will need to work on materializing the requirements and designs into an actual solution that can be deployed and made accessible to end-users. This is clearly more than a one-person job. I resist the urge to ask Rusty for help again, as he’s drifting off to sleep. Okay then: I will put&#8230; Agent Software Developer, Agent Développeuse Logiciel, Agent Ohjelmistokehittäjä and Agent Softwareentwickler on that task. They are the ones who will implement the dialogue, create the access to the client’s backend systems (this is crucial if we want the application to provide personalized responses or interact with the system on behalf of the user), write unit tests and adapt existing tools like chat widgets to any particular needs of the project. Without developers, a conversational application is nothing more than a concept. Experienced developers can also provide useful feedback to designers and help create successful applications.</p>
<p>To make all the pieces work together, I also need, let’s see… Yes: Agent Integrator. Her broad range of skills, including software and general problem solving, will be instrumental to deliver a functioning solution adapted to the needs of the client. Her generalist approach will help her go through all the troubleshooting that inevitably occurs when integrating large and complex projects.</p>
<p>Nearly there. Beside me, Rusty is snoring, living his best cat dreams. I will require the valuable help of the QA Specialists Squad. They will play an essential role in making sure that the deployed application complies entirely with detailed specifications and meets all requirements. The Squad will interact with designers and developers, but also with the client, supporting them during user acceptance testing phases. They are responsible for test plans and for defining all the detailed test cases, whether manual or automated (which are essential in the context of continuous integration and continuous delivery (CI/CD)). The quality of the deployed application depends a lot on the dedication and professionalism of QA Specialists, as they are the ones who give the final go before deployment.</p>
<p>Yes, that should do it. Time to properly kick-start this project. But first, a little cup of tea would be great. As I get up, I notice a thick cloud of smoke rising out of my garbage bin in the backyard. I sigh under my breath, to avoid waking Rusty. The tea will have to wait. I must deal with that self-destructing (or rather all-destructing) recorder first.</p>
<p>Why can’t the Secretary just send emails, like normal people?</p>
<p>&nbsp;</p>
<p><em>Thank you to my colleagues Linda Thibault and Karine Déry</em></p><p>The post <a href="https://www.nuecho.com/mandate-converastional-job-project/">Mandate: Possible – The conversational application</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Chloe: the Evolution, or Building a Covid-19 Chatbot with Rasa &#8211; Part 2</title>
		<link>https://www.nuecho.com/chatbot-rasa-artificial-intelligence-covid-19-2/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=chatbot-rasa-artificial-intelligence-covid-19-2</link>
		
		<dc:creator><![CDATA[Karine Dery]]></dc:creator>
		<pubDate>Wed, 09 Sep 2020 19:28:37 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=7010</guid>

					<description><![CDATA[<p>Episode 1: NLU and Error Handling Flashback &#8211; Scene 4: Question Answering and Following TED As I mentioned in the first post, the goal with the Q&#38;A flow was that the user could ask a question about Covid-19 and we would display the answer returned by Mila’s model API. There have been multiple versions of [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/chatbot-rasa-artificial-intelligence-covid-19-2/">Chloe: the Evolution, or Building a Covid-19 Chatbot with Rasa – Part 2</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
<p>The post <a href="https://www.nuecho.com/chatbot-rasa-artificial-intelligence-covid-19-2/">Chloe: the Evolution, or Building a Covid-19 Chatbot with Rasa &#8211; Part 2</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>Episode 1: NLU and Error Handling</h2>
<h3><img decoding="async" class="wp-image-6914 alignnone size-full" src="https://www.nuecho.com/wp-content/uploads//2020/09/disclaimer.jpeg" alt="" width="1947" height="257" srcset="https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer.jpeg 1947w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-300x40.jpeg 300w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-1024x135.jpeg 1024w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-768x101.jpeg 768w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-1536x203.jpeg 1536w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-1080x143.jpeg 1080w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-1280x169.jpeg 1280w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-980x129.jpeg 980w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-480x63.jpeg 480w" sizes="(max-width: 1947px) 100vw, 1947px" /></h3>
<h3>Flashback &#8211; Scene 4: Question Answering and Following TED</h3>
<p>As I mentioned in the first post, the goal with the Q&amp;A flow was that the user could ask a question about Covid-19 and we would display the answer returned by Mila’s model API. There have been multiple versions of this portion of the application, from very basic to quite complex, and it was integrated in more and more places in the dialogue.</p>
<p>In the first version of the question-answering flow, the user had to choose the “Ask question” option in the main menu or after an assessment, and then we collected the question. We planned for four possible outcomes of the question-answering API call (the fourth was still not implemented in the model when our participation in the project ended):</p>
<ul>
<li>Failure: API call failed</li>
<li>Success: API call succeeded and the model provided an answer</li>
<li>Out of distribution (OOD): API call succeeded but the model provided no answer</li>
<li>Need assessment: API call succeeded but the user should assess their symptoms to get their answer</li>
</ul>
<p>If the outcome was a success, Chloe would ask an additional question to know if the answer was useful (the <a target="_blank" href="https://github.com/botfront/rasa-webchat" rel="noopener">chat widget</a> we used did not provide any kind of thumbs-up/thumbs-down UI to easily skip this interaction). If the outcome was OOD, the user was asked to reformulate.</p>
<p>Collecting the question, the reformulation, and the feedback would be done in a form, but where and how to implement the transitions to the other flows was not so clear. There were six different transitions after the Q&amp;A flow, depending on the outcome and on the presence of the “self_assess_done” slot we described earlier. We vaguely thought about asking the user what they wanted to do next inside the form, to keep the Q&amp;A flow logic centralized, but the idea was discarded because we came up with no clean way to implement it; that is why we ended up relying on stories and the TED policy to predict the deterministic transitions.</p>
<p>We were also confronted with the problem that some of these transitions were “affirm/deny” questions, “affirm” either leading to an assessment or to asking another question. At this point, our basic assessment stories started directly with “get_assessment”, as a shortcut for memoization, and starting a story with “get_assessment OR affirm” would obviously lead to unwanted matches. We put off this inconvenience with a solution that only worked because we controlled the user input through buttons. Like this:</p>
<p><em>An intent shortcut with buttons</em></p>
<p><img decoding="async" class="aligncenter wp-image-7028 size-full" src="https://www.nuecho.com/wp-content/uploads/2020/09/1-1.bmp" alt="" width="467" height="173" /></p>
<p>This way, we did not have to add stories of assessments following a question-answering, but in hindsight we should have done it then, since adding stories with Q&amp;A following an assessment worked (mostly) well, and we had to do it anyway when adding NLU.</p>
<h3>Flashback &#8211; Scene 5.25: Daily Check-In Enrollment Detours</h3>
<p>The daily check-in enrollment flow design had been augmented and included unhappy paths. These had to be addressed since the phone number and validation code (added in this version) were collected directly from user text. These are the cases we addressed:</p>
<ul>
<li>Phone number is invalid</li>
<li>User says they don’t have a phone number</li>
<li>User wants to cancel because they don’t want to give their phone number</li>
<li>Validation code is invalid</li>
<li>User did not receive the validation code and wants a new one sent</li>
<li>User did not receive the validation code and wants to change the phone number</li>
</ul>
<p>Some of these are something between a digression and error-handling, and we thought of implementing them as “controlled” digressions, as we had already done for the pre-conditions explanations, which went as follows:</p>
<p><img decoding="async" class="aligncenter wp-image-7030 size-full" src="https://www.nuecho.com/wp-content/uploads/2020/09/digression-en.gif" alt="" width="600" height="675" /></p>
<p>But since most of them involved error counters, error messages, or more complexity, we decided to manage them all inside the form instead of separating the logic between forms, stories and intent mappings. It did have downsides (other than the hundreds of lines of additional code): some logic happened over multiple interactions and we had to add many slots for counters and flags to keep track of the progression (our final version of the form uses ten such slots).</p>
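<p>To give a flavour of what “managing it all inside the form” looks like, here is a simplified, hypothetical sketch in the Rasa 1.x SDK style we were using. The slot names, messages, and error limit are ours, not Chloe’s actual code.</p>
<pre>
# Simplified sketch of form-internal error handling with counter slots,
# in the Rasa 1.x SDK style. All names and limits are hypothetical.
from typing import Any, Dict, List, Text

from rasa_sdk import Tracker
from rasa_sdk.executor import CollectingDispatcher
from rasa_sdk.forms import FormAction


class DailyCiEnrollForm(FormAction):
    def name(self) -> Text:
        return "daily_ci_enroll_form"

    def required_slots(self, tracker: Tracker) -> List[Text]:
        return ["phone_number", "validation_code"]

    def slot_mappings(self) -> Dict[Text, Any]:
        # Both values are typed free-form by the user, not entities.
        return {
            "phone_number": self.from_text(),
            "validation_code": self.from_text(),
        }

    def validate_phone_number(
        self,
        value: Text,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> Dict[Text, Any]:
        digits = "".join(ch for ch in value if ch.isdigit())
        if len(digits) == 10:
            return {"phone_number": digits, "phone_error_count": 0}
        # Keep track of consecutive errors in a dedicated slot; the real
        # flow also cancels the form after too many errors, which is
        # omitted here for brevity.
        errors = (tracker.get_slot("phone_error_count") or 0) + 1
        dispatcher.utter_message(template="utter_invalid_phone_number")
        return {"phone_number": None, "phone_error_count": errors}

    def submit(self, dispatcher, tracker, domain):
        dispatcher.utter_message(template="utter_daily_ci_enrolled")
        return []
</pre>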
<h3>Flashback &#8211; 5.75: Stopping for a Special Class for TED</h3>
<p>After some tests, it was brought to our attention that the user could get stuck in a Q&amp;A OOD loop since we only gave the option to reformulate. The design was changed so that the user could either retry or exit Q&amp;A, and we added two more transitions for this case.</p>
<p>Adding these, we hit a thin wall: the <a target="_blank" href="https://rasa.com/docs/rasa/core/policies/#ted-policy" rel="noopener">TED policy</a> did not learn the correct behaviour after Q&amp;As: it mixed up the impacts of the “question_answering_status” and “symptoms” slots. Re-distributing the Q&amp;A examples equally between assessments with no, mild, or moderate symptoms was clerical work, but it worked, and in the end the policy predicted the correct behaviour on conversations that were not in our stories.</p>
<h3>Scene 6: Implementing Testing Sites Navigation on Autopilot</h3>
<p>Testing sites navigation, after wrestling with Q&amp;A transitions and daily-ci enrollment error-handling, brought no new challenge. The flow consisted of three major steps:</p>
<ol>
<li>Explain how it works and ask if the user wants to continue</li>
<li>If so, collect their postal code and validate its format and existence, cancelling after too many errors</li>
<li>Display the resulting testing sites or offer a second try if there were none.</li>
</ol>
<p>Consistent with our previous implementations, we used a form to collect the postal code, handle errors, make the API calls and offer the second try, and stories to display the explanations and transition to other flows. The transitions, again, varied depending on the API call outcome and on the “self_assess_done” slot.</p>
<h3>Scene 7: Exploring the Sinuous Path of NLU</h3>
<p>When we finally got to the end of the feature list the fast buttons-no-error-handling way, we could explore integrating NLU and handling unhappy paths. We started with the first input/main menu as a test. Anything that was not part of the options would be sent to the Q&amp;A API, but due to the non-contextual NLU in Rasa, and the fact that we expected a large variety of questions for the Q&amp;A, this “anything” could be any intent, with any score. “How will we handle all these intents?” was not as trivial a question as it might seem.</p>
<h4>Option 1: Add examples in stories</h4>
<p>The straightforward path was to add stories with unsupported intents and the error behavior (directing to the Q&amp;A form), but how many examples would it take? The TED policy could not be expected to learn to use these error examples as catch-alls, and using ORs to include all unsupported intents would have multiplied the training time exponentially as soon as we applied this approach to other cases. This path was a dead-end.</p>
<h4>Option 2: Core fallback</h4>
<p>If we did not include the unsupported intents in stories, the TED policy would still predict something, but we could hope for the confidence score to be low and set a threshold to trigger a fallback action. The action would replace the intent by “fallback”, and we could manage this one intent in the stories. But our expected behaviors did not all get very good scores; some were not far from what a misplaced “affirm” could get, since that intent appeared in many stories. Thus, we did not want to depend on a threshold to trigger the fallback.</p>
<h4>The solution: Unsupported intent policy</h4>
<p>We ended up using the “fallback” intent idea, but with a deterministic policy. The policy predicted the intent-replacing action if the latest relevant action before the user input was the main-menu question and the intent was not in the list of supported ones. Stories and memoization were used to trigger the Q&amp;A form and manage the peculiar transitions after it (call failure and OOD were followed by a main menu error message instead of the regular messages). To achieve this, the Q&amp;A form was modified to pre-fill the question slot with the last user input if the intent was “fallback” (a simplified sketch of such a policy follows the screenshot below):</p>
<p><em>Using the trigger message in the question answering form</em></p>
<p><img decoding="async" class="aligncenter wp-image-7032 size-full" src="https://www.nuecho.com/wp-content/uploads/2020/09/3-1.bmp" alt="" width="571" height="446" /></p>
<h3>Scene 8: Further Explorations</h3>
<p>As a second step, we added NLU in yes-no questions, which, per design, simply triggered a reformulated question with buttons and no text field. The majority of those were in forms, some with exceptions to the “utter_ask_{slot_name}” message convention. The exceptions also applied to the error messages, so a generic approach, which would not even have applied to all cases, seemed too complicated for the benefit, and we did not spend time designing one. It seemed simpler and faster to just manage everything in the forms like this:</p>
<p><img decoding="async" class="aligncenter wp-image-7034 size-full" src="https://www.nuecho.com/wp-content/uploads/2020/09/4.bmp" alt="" width="576" height="633" /></p>
<h3>Intermission: Losing the Feedback Phantom Trailing Behind Us</h3>
<p>Adding NLU, and consequently flexibility, we were reminded of the mandatory and cumbersome feedback interaction that still haunted us, and decided to make it more flexible, too. We still didn’t have a feedback widget or time to implement one, so we kept the question, but adapted the reaction: if the user answered something other than “affirm/deny”, it would be treated as if they were already in the next question, which offered to ask another question and could lead to the other functionalities. This required a bit of gymnastics to preemptively exit the form and “reproduce” the user input:</p>
<p><img decoding="async" class="aligncenter wp-image-7036 size-full" src="https://www.nuecho.com/wp-content/uploads/2020/09/5.bmp" alt="" width="576" height="682" /></p>
<h3>Scene 9: Final Sprint to Add NLU</h3>
<p style="text-align: left;">Since we already had the policy to replace intents with “fallback”, error-handling outside forms was mostly a matter of adding entries to the dictionary of latest action-supported intents, and stories to react to the “fallback” intent, either by entering the Q&amp;A form or displaying an error message to follow the design. Inside forms, we applied the same approach as for yes-no questions. We were forced to make some collateral changes, like adding a province entity, or adding stories (mostly ORs though) to manage the transitions where “affirm” or “deny” were valid (now that the buttons shortcut was unavailable). We also had to backtrack on our cleanly handled pre-conditions digression since the simple <a target="_blank" href="https://rasa.com/docs/rasa/core/policies/#mapping-policy" rel="noopener">mapping policy</a> solution could not apply with error-handling, and managed it inside the form like everything else.</p>
<p style="text-align: center;"><strong>The end</strong></p>
<p>Looking back, even though we added NLU, it seems like we took a lot of shortcuts, a lot of not-so-rasa-esque approaches. Our use case, completely predictable, with no random navigation, full of exceptions and tiny variations, did not correspond to a typical Rasa use case. We wrestled with lots of obstacles that come naturally when trying to implement a boxes-and-arrows design with Rasa. But Rasa offers flexibility through code and possible additions, and in the end, we often chose code to represent dialogue patterns because when time is short, the road we know is the safest way to end up where we want.</p>
<p>In a further installment, we will dive deeper into two features of a boxes-and-arrows design that are hard to implement with Rasa, namely decision trees and dialogue modularity, and the various ways to implement them. We will also explore whether and how Rasa 2.0, still in the alpha stage at the time of writing, can make this task easier.</p><p>The post <a href="https://www.nuecho.com/chatbot-rasa-artificial-intelligence-covid-19-2/">Chloe: the Evolution, or Building a Covid-19 Chatbot with Rasa – Part 2</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Chloe: the Evolution, or Building a Covid-19 Chatbot with Rasa</title>
		<link>https://www.nuecho.com/chatbot-rasa-artificial-intelligence-covid-19/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=chatbot-rasa-artificial-intelligence-covid-19</link>
		
		<dc:creator><![CDATA[Karine Dery]]></dc:creator>
		<pubDate>Tue, 08 Sep 2020 18:00:00 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=6882</guid>

					<description><![CDATA[<p>Context When the confinement measures started in Canada, we were contacted by Dialogue, a telemedicine provider, to help them migrate Chloe, their Covid-19 rule-based chatbot, to a conversational chatbot, using Rasa, and add new functionalities to the bot. This would be a 10 weeks, agile, iterative project. Here are Chloe’s high-level functionalities: Self-assessment: provide personalized [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/chatbot-rasa-artificial-intelligence-covid-19/">Chloe: the Evolution, or Building a Covid-19 Chatbot with Rasa</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>Context</h2>
<p><span style="font-size: x-large;">When the confinement measures started in Canada, we were contacted by <a target="_blank" href="https://www.dialogue.co/en/" rel="noopener">Dialogue</a>, a telemedicine provider, to help them migrate Chloe, their Covid-19 rule-based chatbot, to a conversational chatbot, using <a target="_blank" href="https://rasa.com/" rel="noopener">Rasa</a>, and add new functionalities to the bot. This would be a 10 weeks, agile, iterative project.</span></p>
<p>Here are Chloe’s high-level functionalities:</p>
<ul>
<li>Self-assessment: provide personalized recommendations based on one’s symptoms, following federal and provincial guidelines</li>
<li>Question-Answering (Q&amp;A): allow the user to ask Covid-19 related questions using a <a target="_blank" href="https://github.com/dialoguemd/covidfaq" rel="noopener">model</a> developed by <a target="_blank" href="https://mila.quebec/en/" rel="noopener">Mila</a></li>
<li>Daily check-in: help users monitor their symptoms day by day. Users who subscribe to the daily check-in receive a link by SMS once a day to connect to Chloe and assess the progression of their symptoms</li>
<li>Screening/testing sites navigation: using their postal code, provide the user with a list of testing sites near them through <a target="_blank" href="https://clinia.com/en-ca/product/places/covid-places" rel="noopener">Clinia’s API</a></li>
</ul>
<p>The design was built iteratively as we added functionalities, and it was constantly moving, adjusted to account for comments from Dialogue’s doctors and volunteer testers. The implementation followed not far behind. There are many ways to implement a given dialogue pattern with Rasa, and with the ever-changing designs, our implementation choices often felt like navigating a labyrinth. We made it to the exit, but not without turning back a couple of times after hitting a wall or to avoid a cliff. Without an overview of the final design, we ended up with some inconsistent implementations, and without infinite time, some incomplete refactors. But this context also led us to explore paths we wouldn’t have explored with Rasa if we’d had the time to identify patterns and create generic components to apply them.</p>
<p>In this post and the next one of this short series, I will tell the tale of how we developed Chloe. For each step of our course, I will describe the main obstacles we faced, and the implementation decisions we made, often in the heat of the moment. In this first installment, we will mostly focus on the self-assessment and daily check-in flows.</p>
<h3><span style="font-size: 34px; font-family: 'Roboto Slab', Georgia, 'Times New Roman', serif;">Episode 1: Assessment Flows</span></h3>
<p><img decoding="async" class="wp-image-6914 alignnone size-full" src="https://www.nuecho.com/wp-content/uploads//2020/09/disclaimer.jpeg" alt="" width="1947" height="257" srcset="https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer.jpeg 1947w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-300x40.jpeg 300w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-1024x135.jpeg 1024w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-768x101.jpeg 768w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-1536x203.jpeg 1536w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-1080x143.jpeg 1080w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-1280x169.jpeg 1280w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-980x129.jpeg 980w, https://www.nuecho.com/wp-content/uploads/2020/09/disclaimer-480x63.jpeg 480w" sizes="(max-width: 1947px) 100vw, 1947px" /></p>
<h3><span style="font-family: 'Roboto Slab', Georgia, 'Times New Roman', serif; font-size: 29px;">Scene 1: Sprint to the First Assessment Demo</span></h3>
<p><span style="font-size: x-large;">Very early in the project (i.e. day 8), we were asked if we could demo an assessment flow at the end of the day. When the request for a demo came, the first design was still boiling in our designer’s head; we had a running Rasa project but no dialogue implemented. Nonetheless, we rolled up our sleeves and made it happen.</span></p>
<p>The initial demo was a simple decision tree to find out the gravity of the user’s symptoms and make appropriate recommendations. We took the straightest path forward: <a target="_blank" href="https://rasa.com/docs/rasa/core/policies/#augmented-memoization-policy" rel="noopener">augmented memoization policy</a> with a <a target="_blank" href="https://rasa.com/docs/rasa/core/stories/" rel="noopener">story</a> representing each possible path. We used buttons and blocked the text input field so we wouldn’t need to train an NLU model or handle unhappy paths.</p>
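<p>For illustration, a button-only question of that kind can be dispatched from a custom action like this (a minimal sketch with hypothetical wording and action name). Since every button carries a “/intent” payload, a click injects the intent directly, and no NLU model is involved:</p>
<pre><code># Sketch of a button-only question dispatched from a custom action
# (Rasa 1.x SDK); the wording and the action name are hypothetical.
from rasa_sdk import Action

class ActionAskSevereSymptoms(Action):
    def name(self):
        return "action_ask_severe_symptoms"

    def run(self, dispatcher, tracker, domain):
        # The "/intent" payloads bypass NLU entirely: clicking a button
        # sends the intent itself as the user message.
        dispatcher.utter_button_message(
            "Do you have severe symptoms, such as significant trouble breathing?",
            buttons=[
                {"title": "Yes", "payload": "/affirm"},
                {"title": "No", "payload": "/deny"},
            ],
        )
        return []</code></pre>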
<h3><span style="font-family: 'Roboto Slab', Georgia, 'Times New Roman', serif; font-size: 29px;">Scene 2: The Path Becomes Muddy as Assessment Flows Multiply</span></h3>
<p><span style="font-size: x-large;">The next major increment in the flows was to add a distinction, at the entry point of the self-assessment flow, between three situations:</span></p>
<ul>
<li>The user thinks they might be sick and wants to assess their symptoms (initial case)</li>
<li>The user has tested positive and wants to assess their symptoms and get advice</li>
<li>The user has done the self-assessment before and comes back to reassess their symptoms</li>
</ul>
<p>This distinction created several variations in the basic flow, such as asking if the symptoms worsened in the case of a reassessment, or starting with self-isolation recommendations for someone who tested positive.</p>
<p>We continued on the same path, adding stories to implement these two new flows, although we started noticing the quickly growing number of stories for only three flows (which would keep getting more complex) and the repetition between similar paths.</p>
<p><em>Two similar stories for a user with mild symptoms</em></p>
<p><a target="_blank" href="https://www.nuecho.com/wp-content/uploads//2020/09/2.bmp" rel="noopener"><img decoding="async" class="wp-image-6893 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads//2020/09/2.bmp" alt="" width="576" height="401" /></a></p>
<p>Seeing no simple solution to this in stories &#8211; <a target="_blank" href="https://rasa.com/docs/rasa/core/stories/#checkpoints-and-or-statements" rel="noopener">checkpoints and ORs</a> could not help because the similar parts are sandwiched between the different intents and the variations they create &#8211; we didn’t make any significant implementation change at this point.</p>
<p>While implementing those three flows, we needed to make a change that applied to all three: after the user says they don’t have severe symptoms, Chloe collects their province of residence and age to make more precise recommendations. This time, the straightforward path was to put these pieces in a form: we collect the information once, and it is easily reusable in all our stories.</p>
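<p>Such a form can be sketched as follows with the Rasa 1.x SDK (slot names and mappings are hypothetical):</p>
<pre><code># Sketch of a form collecting province and age (Rasa 1.x SDK);
# slot names and mappings are hypothetical.
from rasa_sdk.forms import FormAction

class ProfileForm(FormAction):
    def name(self):
        return "profile_form"

    @staticmethod
    def required_slots(tracker):
        # The form asks, in order, for every slot that is still empty.
        return ["province", "age"]

    def slot_mappings(self):
        return {
            "province": self.from_text(),
            "age": self.from_text(),
        }

    def submit(self, dispatcher, tracker, domain):
        # The collected slots stay on the tracker, reusable by every story.
        return []</code></pre>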
<h3><span style="font-family: 'Roboto Slab', Georgia, 'Times New Roman', serif; font-size: 29px;">Scene 3: Fast Lane to Daily Check-In Enrollment</span></h3>
<p><span style="font-size: x-large;">Moving on to another functionality, we implemented the daily check-in enrollment; if the user shows symptoms, Chloe offers the daily check-in. If the user accepts, she collects their name and phone number, notes if they have preconditions that make them more susceptible to complications, etc. This flow was also without a doubt a form. In this first simple version, even though we used free text to collect the first name and phone number, there was no real error-handling: we used the complete user input for the name and extracted all the digits of the input for the phone number, re-asking for it if there were not 10 or 11 digits.</span></p>
<h3><span style="font-family: 'Roboto Slab', Georgia, 'Times New Roman', serif; font-size: 29px;">Scene 4: Answering Questions and Following TED</span></h3>
<p><span style="font-size: x-large;">The Q&amp;A functionality is meant to allow the user to ask any question they have about Covid-19, and send that question to the module developed by Mila, receive a response and display it to the user. We wanted to make this feature available in every flow, with different paths leading to it and different paths leading out of it depending on the type of outcome (the different types of outcome will be described, as well as the details of this feature and its evolution, in the next installment of this post).</span></p>
<p>Since Chloe would not offer an assessment if any type of assessment had already been done in the conversation, the transitions after the Q&amp;A also depended on this fact, multiplying the outward paths. Memoization wouldn’t suffice to learn this difference since we could loop through Q&amp;As over and over. Thus, we added a featurized “self_assess_done” slot, combined it with assessment+Q&amp;A stories, and counted on the <a target="_blank" href="https://rasa.com/docs/rasa/core/policies/#ted-policy" rel="noopener">TED policy</a> to learn from few examples. It worked, but our stories file suddenly grew a lot.</p>
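<p>Setting such a slot is a one-line custom action (a sketch with a hypothetical action name; for the TED policy to see the slot, it must be declared with a featurized type, such as bool, in the domain):</p>
<pre><code># Sketch: marking the assessment as done so that the TED policy can
# condition on it; the action name is hypothetical.
from rasa_sdk import Action
from rasa_sdk.events import SlotSet

class ActionCompleteAssessment(Action):
    def name(self):
        return "action_complete_assessment"

    def run(self, dispatcher, tracker, domain):
        return [SlotSet("self_assess_done", True)]</code></pre>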
<h3><span style="font-family: 'Roboto Slab', Georgia, 'Times New Roman', serif; font-size: 29px;">Intermission: Backtracking to Forms to Avoid a Stories Jungle</span></h3>
<p><span style="font-size: x-large;">Foreseeing this coming multiplication and lengthening of our stories, we decided to transfer the common part of the assessments to a form before completely integrating the Q&amp;As. This would shorten and simplify stories, but also ease the collection of slots (presence of cough or fever, newly added as a separated question, and the degree of symptoms), necessary if the user subscribed to daily check-in. A form meant less repetition but also using intermediary disposable slots to allow the tree-like filling of a single slot to cover the degree of symptoms. The slot was featurized to adapt the daily check-in offer and recommendations after the form in stories.</span></p>
<p><span style="font-size: x-large;">But this unique assessment form did not last long; the design changed while we were looking away. Two recommendation messages, about self-isolation and home assistance, were replaced by small flows with one question each. The design and implementation around these moved a lot. First, both flows were forms, inserted where the corresponding message was. Then we had to triplicate the assessment form to insert the self-isolation subflow, either before, after or in the middle of it depending on the situation (regular assessment, tested positive or reassessing). Later, the self-isolation flow was moved and modified for each situation, but we kept three separate forms to gradually include the specific questions that were left out of the common version. We kept code in common, but the “how” varied over time, and more details will be given on this subject as the question of modularity will be touched in another post.</span></p>
<p><span style="font-size: x-large;">At this point, our general model used a combination of stories, forms and actions that can be summarized as follows:</span></p>
<ul>
<li>Stories: handle transitions between main flows and subflows, and define the sequences of forms, conditions and actions that are possible for each main functionality and high-level flow</li>
<li>Forms: collect pieces of information and define decision trees, handle reusable subdialogues that include at least one question, etc.</li>
<li>Actions: various uses that do not require the collection of information, including displaying multiple messages in a row.</li>
</ul>
<p>Here is an example of a story at this moment:</p>
<p><em>Basic self-assessment flow followed by a question</em></p>
<p><a target="_blank" href="https://www.nuecho.com/wp-content/uploads//2020/09/3.bmp" rel="noopener"><img decoding="async" class="wp-image-6895 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads//2020/09/3.bmp" alt="" width="425" height="493" /></a></p>
<h3><span style="font-family: 'Roboto Slab', Georgia, 'Times New Roman', serif; font-size: x-large;">Scene 5: Daily Check-In: Another Type of Assessment, a Known Path</span></h3>
<p><span style="font-size: x-large;">The purpose of the daily check-in is to contact the user (who previously enrolled) everyday to assess their symptoms and, among other things, evaluate the progression of these symptoms. An initial question allows to establish which of three situations applies to the user: they feel better than the day before, they feel worse, or there were no changes. Each situation had its own decision tree, and each one of these had variations depending on the symptoms of the day before. While some questions were asked in each flow, overall, there were not enough similarities to reuse significant portions of the dialogue. Therefore, with the experience we had in implementing self-assessment flows, we knew that the better way to implement the daily check-in flows would be through three separate forms.</span></p>
<h3><span style="font-family: 'Roboto Slab', Georgia, 'Times New Roman', serif; font-size: 29px;">Scene 5.5: Daily Check-In All the Way</span></h3>
<p><span style="font-size: x-large;">There was far more than the assessment to the daily check-in: an “invalid URL” flow (id in URL sent to the user to access the daily check-in does not exist), a one-click offer to opt-out before the assessment, another one, varying depending on that day’s symptoms, after the assessment, and a set of recommendations at the end. The invalid URL flow was added as stories because it merely directed to other features. The opt-out options were added as other forms since we collected information and had to call our database. The recommendations started as an action to remain separated as a different flow, called as a followup action when necessary in the daily-ci end keep-or-cancel form. Then we realized that <a target="_blank" href="https://rasa.com/docs/rasa/api/events/#force-a-followup-action" rel="noopener">followup actions</a> still had to appear in stories, and when we added the transitions to other functionalities, it made more sense to directly include the recommendations in the form instead.</span></p>
<h3><span style="font-family: 'Roboto Slab', Georgia, 'Times New Roman', serif; font-size: 34px;">In the Next Episode</span></h3>
<p><span style="font-size: x-large;">In this first installment, we described how we used stories and forms to implement the many variants of the self-assessment and daily check-in flows. While stories were appropriate at first to define simple decision trees with few branches, it quickly became obvious that they are not the best tool to implement complex decision trees, conditional branching or reusable subflows. We therefore had to create several forms that were embedded in stories, and rely on stories to manage higher-level flows.</span></p>
<p>In the next steps of the project, we built on the initial functionalities to add the following features:</p>
<p style="padding-left: 30px;">• We expanded and improved the Q&amp;A flows</p>
<p style="padding-left: 30px;">• We added the testing sites navigation</p>
<p style="padding-left: 30px;">• We added NLU support, first to portions of the flows, and ultimately everywhere</p>
<p>These additions brought new challenges in how we used Rasa, not only in defining and developing the dialogue but also in incorporating NLU and ensuring its performance and accuracy were adequate.</p>
<p>The next installment of this series will explore these topics.</p><p>The post <a href="https://www.nuecho.com/chatbot-rasa-artificial-intelligence-covid-19/">Chloe: the Evolution, or Building a Covid-19 Chatbot with Rasa</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Conversational automation initiatives &#8211; Our Survey</title>
		<link>https://www.nuecho.com/conversational-automation-initiatives-our-survey/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=conversational-automation-initiatives-our-survey</link>
		
		<dc:creator><![CDATA[Stéphane Séguin]]></dc:creator>
		<pubDate>Mon, 27 Apr 2020 18:00:20 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/conversational-automation-initiatives-our-survey/</guid>

					<description><![CDATA[<p>Study on the use of conversational artificial intelligence in contact centers Nu Echo, an expert in intelligent conversational automation, is currently conducting a study to learn more about the vision and the challenges that contact centers face in adopting these new technologies. This is why we are asking for your opinion today. This survey focuses [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/conversational-automation-initiatives-our-survey/">Conversational automation initiatives – Our Survey</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h3>Study on the use of conversational artificial intelligence in contact centers</h3>
<p>Nu Echo, an expert in intelligent conversational automation, is currently conducting a <strong>study</strong> to learn more about the vision and the challenges that contact centers face in adopting these new technologies.</p>
<p>This is why we are asking for your opinion today.</p>
<p>This survey focuses on the <strong>characteristics </strong>and <strong>activities </strong>of businesses using <strong>AI-based conversational automation systems</strong>.</p>
<p>It takes approximately <strong>8 minutes</strong> to answer all of the questions. All data collected will remain anonymous.</p>
<p>We will publish the results of this survey so that you can access the outcome.</p>
<p>Thank you for your participation!</p>
<p><em style="font-size: 18px;">(Survey closed on May 31, 2020)</em></p><p>The post <a href="https://www.nuecho.com/conversational-automation-initiatives-our-survey/">Conversational automation initiatives – Our Survey</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p><p>The post <a href="https://www.nuecho.com/conversational-automation-initiatives-our-survey/">Conversational automation initiatives &#8211; Our Survey</a> appeared first on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
