<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>IVR - AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</title>
	<atom:link href="https://www.nuecho.com/category/ivr/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.nuecho.com/category/ivr/</link>
	<description>Nu Echo</description>
	<lastBuildDate>Wed, 14 Dec 2022 19:01:23 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.nuecho.com/wp-content/uploads/2019/11/cropped-favicon-32x32.png</url>
	<title>IVR - AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</title>
	<link>https://www.nuecho.com/category/ivr/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Call automation doesn&#8217;t have to be risky, long and costly</title>
		<link>https://www.nuecho.com/call-automation-doesnt-have-to-be-risky-long-and-costly/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=call-automation-doesnt-have-to-be-risky-long-and-costly</link>
		
		<dc:creator><![CDATA[Pierre Moisan]]></dc:creator>
		<pubDate>Wed, 14 Dec 2022 16:23:53 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Industries]]></category>
		<category><![CDATA[IVR]]></category>
		<category><![CDATA[CAI]]></category>
		<category><![CDATA[Call automation]]></category>
		<category><![CDATA[CCaaS]]></category>
		<category><![CDATA[contact center automation]]></category>
		<category><![CDATA[contact center virtual agent]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[Conversational design]]></category>
		<category><![CDATA[CX]]></category>
		<category><![CDATA[use cases virtual agents]]></category>
		<category><![CDATA[VAaaS]]></category>
		<category><![CDATA[virtual agent]]></category>
		<guid isPermaLink="false">https://www.nuecho.com/?p=9565</guid>

					<description><![CDATA[<p>As explained in a previous post, “Ladies and gentlemen, we&#8217;re experiencing some turbulence. Please hold the line while we try to find an available agent.”, call automation solutions can help customer contact centers address several challenges at once, such as variable call volumes and workforce shortages. In recent years, virtual agents have benefited [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/call-automation-doesnt-have-to-be-risky-long-and-costly/">Call automation doesn’t have to be risky, long and costly</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">As explained in a previous post, “</span><a href="https://www.nuecho.com/news-events/ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent/"><span style="font-weight: 400;">Ladies and gentlemen, we&#8217;re experiencing some turbulence. Please hold the line while we try to find an available agent.</span></a><span style="font-weight: 400;">”, call automation solutions can help customer contact centers address several challenges at once, such as variable call volumes and workforce shortages.</span></p>
<p><span style="font-weight: 400;">In recent years, virtual agents have benefited from outstanding technology improvements in the field of Natural Language Processing (NLP) and Artificial Intelligence (AI). </span></p>
<p><span style="font-weight: 400;">However, the complexity and the amount of effort required to leverage conversational AI platforms such as those from Google or Amazon still prevent many businesses from seeing a move towards virtual agents as a profitable investment. That’s where managed virtual agent solutions save the day!</span></p>
<p><span style="font-weight: 400;">By leveraging the call volumes of several customers, managed virtual agent solution providers are able to offer on-demand virtual agents much faster and at a much lower cost than the implementation of an entire conversational AI platform.</span></p>
<p><span style="font-weight: 400;">Many businesses have developed their own criteria for selecting an outsourced workforce, but these criteria may not entirely apply when it comes to virtual agents.</span></p>
<p><span style="font-weight: 400;"><strong>When choosing a managed virtual agent solution provider, businesses should consider these four factors</strong>:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Integration &amp; security </span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Voice &amp; telephony experience</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Conversational design experience</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Continuously improving solution </span></li>
</ul>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">Let’s explore the details of each factor and what should be expected from providers.</span></p>
<p>&nbsp;</p>
<h2><b>Integration &amp; security</b></h2>
<p><span style="font-weight: 400;">From an integration perspective, a managed virtual agent solution is similar to outsourcing your calls: it involves providing a way to transfer calls to another contact center and giving that provider access to the systems required for the selected use cases.</span></p>
<p><span style="font-weight: 400;">The following figure provides a high-level architecture view of a managed virtual agent solution.</span></p>
<p><img decoding="async" class="aligncenter wp-image-9570 size-full" src="https://www.nuecho.com/wp-content/uploads/2022/12/Virtual-Agent-As-a-Service-VAaaS-Waterfield-NuEcho-Architecture.png" alt="" width="1090" height="664" srcset="https://www.nuecho.com/wp-content/uploads/2022/12/Virtual-Agent-As-a-Service-VAaaS-Waterfield-NuEcho-Architecture.png 1090w, https://www.nuecho.com/wp-content/uploads/2022/12/Virtual-Agent-As-a-Service-VAaaS-Waterfield-NuEcho-Architecture-980x597.png 980w, https://www.nuecho.com/wp-content/uploads/2022/12/Virtual-Agent-As-a-Service-VAaaS-Waterfield-NuEcho-Architecture-480x292.png 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) 1090px, 100vw" /></p>
<p><span style="font-weight: 400;">Ideally, your provider will be able to integrate with your existing systems without requiring you to upgrade some of them, such as your contact center platform. You will also need to review the implications of exposing some access points to an external provider.</span></p>
<p><span style="font-weight: 400;">You will need to discuss how secure the overall integration will be, as customer data and voice interactions will be shared with this provider. Verify that your provider keeps customer data (in transit and at rest) within the geographies you serve. For example, you might not want traffic to go through the US if your business is in Canada. You should also review your provider’s security posture: security audits and certifications (e.g. SOC 2, ISO 27001) can help you confirm compliance with your requirements more quickly.</span></p>
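<p>As a purely hypothetical illustration of the data-residency check described above, the sketch below flags provider endpoints whose region falls outside an allowlist. The endpoint names and region codes are invented for the example, not taken from any real provider.</p>

```python
# Hypothetical sketch: flag provider endpoints that would move customer
# data outside the geographies you serve. Names and region codes are
# illustrative assumptions, not a real provider's API.

ALLOWED_COUNTRIES = {"ca"}  # e.g. a Canadian business keeping data in Canada

provider_endpoints = {
    "telephony": "ca-east-1",
    "speech-to-text": "ca-east-1",
    "analytics": "us-central-1",  # would route customer data through the US
}

def residency_violations(endpoints, allowed):
    """Return endpoint names whose region's country prefix is not allowed."""
    return sorted(name for name, region in endpoints.items()
                  if region.split("-")[0] not in allowed)

print(residency_violations(provider_endpoints, ALLOWED_COUNTRIES))  # → ['analytics']
```

<p>A check like this belongs in a contract review and in automated configuration audits, alongside the provider’s SOC 2 or ISO 27001 evidence.</p>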
<p>&nbsp;</p>
<h2><b>Voice &amp; telephony experience</b></h2>
<p><span style="font-weight: 400;">Providing a great conversational experience on the phone channel is more than taking a chatbot and adding speech-to-text and text-to-speech. Voice conversations are real-time, synchronous communications, and multiple factors are at play: keeping response times in the milliseconds to avoid awkward silences, accurately detecting the end of speech and interruptions (i.e. barge-in), handling low-quality audio and noise, maintaining high recognition accuracy even with accents, hesitations or changes of mind, and rendering natural-sounding responses.</span></p>
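<p>To make one of these factors concrete, here is a minimal, purely illustrative sketch of end-of-speech detection (endpointing) using a simple silence-timeout rule over per-frame voice-activity flags. Production telephony stacks use far more sophisticated, model-based endpointers; every name and threshold below is an assumption for the example.</p>

```python
# Minimal end-of-speech (endpointing) sketch: declare the caller done once
# voice activity has been absent for `silence_timeout_ms`. Illustrative only.

def end_of_speech_index(frames, frame_ms=20, silence_timeout_ms=600):
    """frames: list of booleans, True = speech detected in that frame.
    Returns the index of the frame where end-of-speech is declared,
    or None if the caller is still (or never started) speaking."""
    needed = silence_timeout_ms // frame_ms  # consecutive silent frames required
    silent_run = 0
    started = False
    for i, is_speech in enumerate(frames):
        if is_speech:
            started = True
            silent_run = 0  # any speech resets the silence counter
        elif started:
            silent_run += 1
            if silent_run >= needed:
                return i  # enough trailing silence: caller has finished
    return None

# 400 ms of speech followed by 700 ms of silence (20 ms frames)
print(end_of_speech_index([True] * 20 + [False] * 35))  # → 49
```

<p>Even in this toy version the trade-off is visible: a timeout that is too short cuts callers off mid-hesitation, while one that is too long makes every turn feel sluggish.</p>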
<p><span style="font-weight: 400;">When choosing a managed virtual agent provider, you will want to make sure that they have a lot of experience with the challenges of handling phone communications. </span></p>
<p>&nbsp;</p>
<h2><b>Conversational design experience</b></h2>
<p><span style="font-weight: 400;">As the low satisfaction scores of some IVRs show, a one-size-fits-all mentality does not usually provide a good experience. Personalization should be part of your conversation design strategy. For example, integrating your virtual agents with a CRM can leverage customer data and context to better understand customers’ needs and why they might be contacting you. When working with a provider, ensure they will not just give you a cookie-cutter solution with no personalization in the conversational experience.</span></p>
<p><span style="font-weight: 400;">Focusing on the user experience means reducing friction across interactions. Unfortunately, badly designed IVRs can be too restrictive, following a rigid structure that customers do not appreciate. While you don’t want to trick your customers into believing they are talking to a human, your goal should be to replicate as much as possible the experience of talking to one. Letting people express themselves naturally and capturing the required information as they speak freely is what happens in a human-to-human conversation.</span></p>
<p><span style="font-weight: 400;">You also need to consider your engagement channels with regard to conversational design. While an omnichannel solution is certainly desirable, businesses need to understand the distinct constraints of each channel. Take, for example, an appointment booking use case with a virtual agent. If a customer wants to book an appointment on a day with plenty of availability, a chat interface could show all the times in a widget and let the customer review and select the proper time. On the phone, this strategy will fail, since the number of choices is too great to simply list verbally. A voice user interface comes with several considerations that need to be incorporated in your design. Be wary of providers that tell you their virtual agents work with any channel.</span></p>
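<p>As a toy illustration of this channel difference, the sketch below (all names are hypothetical) picks a presentation strategy per channel: a chat widget can show every slot at once, while a voice prompt should read back only a few and ask the caller to choose.</p>

```python
# Hypothetical sketch of channel-aware slot presentation.
# Function and field names are invented for illustration.

def offer_slots(slots, channel, max_spoken=3):
    """Chat can render every slot in a widget; voice must keep the
    spoken list short enough to remain intelligible."""
    if channel == "chat":
        return {"widget": slots}  # show all options at once
    # voice: read back only the first few options
    spoken = ", ".join(slots[:max_spoken])
    return {"prompt": "I have " + spoken + ". Which works for you?"}

print(offer_slots(["9:00", "9:30", "10:00", "10:30"], "voice"))
```

<p>A real design would also let the caller narrow the list first (“morning or afternoon?”) rather than truncating it arbitrarily; the point is simply that the same slot data needs a different rendering per channel.</p>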
<p>&nbsp;</p>
<h2><b>Continuously improving solution</b></h2>
<p><span style="font-weight: 400;">People are unpredictable. For a virtual agent solution, you need to plan that people might not interact with a virtual agent as expected. In addition, the needs of your customers can evolve with time. Just like human agents, virtual agents need some form of monitoring, quality assurance and training. </span></p>
<p><span style="font-weight: 400;">Therefore, it is important to understand that a virtual agent solution is not only about implementing and launching a solution. It involves monitoring, supporting, maintaining and optimizing the solution to adapt to your customers. You will want to make sure that your provider will be a good partner in constantly improving your solution. </span></p>
<p><span style="font-weight: 400;">You will also want to see how your provider can leverage the usage data to provide insights into the voice of your customers. As people express themselves naturally, this is a great opportunity to identify how you can better serve them. This can also help identify other use cases that could be automated. </span></p>
<p>&nbsp;</p>
<h2><b>Benefits of a managed solution</b></h2>
<p><span style="font-weight: 400;">Leveraging the expertise of a partner provider of virtual agents has multiple benefits. </span></p>
<p><span style="font-weight: 400;">The most significant benefit is accelerating the time-to-value of the solution. A provider will already have integration connectors for systems like yours, will have designed similar conversational agents and dialogs, and will have optimized hard-to-recognize user inputs. This can reduce a virtual agent project from several months to just weeks.</span></p>
<p><span style="font-weight: 400;">Defining, designing, implementing and maintaining virtual agents requires a cross-functional team and a good understanding of the latest conversational AI technologies. Building this expertise in-house can greatly increase the total cost of the solution and the required investments. A fully managed virtual agent provider can reduce these investments and make costs more predictable.</span></p>
<p><span style="font-weight: 400;">Deploying a customer-facing voice virtual assistant can be risky for businesses. According to a </span><a href="https://info.rasa.com/conversational-ai-for-customer-experience-survey-report"><span style="font-weight: 400;">Rasa survey</span></a><span style="font-weight: 400;">, 41% of respondents reported that limited experience building virtual assistants was a barrier to conversational AI adoption, and only 18% of respondents using voice assistants have them in production. A fully managed virtual agent provider can leverage its experience to ensure a successful deployment in production.</span></p>
<p>&nbsp;</p>
<h2><b>Summing up</b></h2>
<p><span style="font-weight: 400;">Automating calls through a fully managed virtual agent solution can help contact centers serve their customers for simple and repetitive tasks, letting human agents focus on value-added calls. It involves partnering with a provider, and businesses should make sure that the key criteria above will be fulfilled by their provider.</span></p>
<p><span style="font-weight: 400;">Nu Echo has 20+ years of experience creating conversational experiences that improve operational efficiency while delivering an exceptional customer experience. If you are interested in a managed virtual agent solution, </span><strong><a href="https://www.nuecho.com/company/contact-us/">contact us today</a>.</strong></p>
<p>&nbsp;</p><p>The post <a href="https://www.nuecho.com/call-automation-doesnt-have-to-be-risky-long-and-costly/">Call automation doesn’t have to be risky, long and costly</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Ladies and gentlemen, we&#8217;re experiencing some turbulence. Please hold the line while we try to find an available agent.</title>
		<link>https://www.nuecho.com/ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent</link>
		
		<dc:creator><![CDATA[Pierre Moisan]]></dc:creator>
		<pubDate>Wed, 23 Nov 2022 14:36:52 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Industries]]></category>
		<category><![CDATA[IVR]]></category>
		<category><![CDATA[contact center automation]]></category>
		<category><![CDATA[contact center virtual agent]]></category>
		<category><![CDATA[Conversational AI]]></category>
		<category><![CDATA[Conversational design]]></category>
		<category><![CDATA[CX]]></category>
		<category><![CDATA[use cases virtual agents]]></category>
		<category><![CDATA[virtual agent]]></category>
		<guid isPermaLink="false">https://www.nuecho.com/?p=9547</guid>

					<description><![CDATA[<p>Getting familiar with waiting times of over an hour before you can talk to an agent? Has it gotten worse with the pandemic? What is going on with call centers?  Imagine you booked tickets to an upcoming event that you are really excited about. A few weeks later, you get an email saying that your [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent/">Ladies and gentlemen, we’re experiencing some turbulence. Please hold the line while we try to find an available agent.</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><i><span style="font-weight: 400;">Are you getting used to waiting times of over an hour before you can talk to an agent? Has it gotten worse with the pandemic? What is going on with call centers?</span></i></p>
<p><span style="font-weight: 400;">Imagine you booked tickets to an upcoming event that you are really excited about. A few weeks later, you get an email saying that your event has been canceled. You try to understand your options: can I get a refund, is the event postponed, what seating will I get if I reschedule, … You browse through the Web site and cannot find answers to your questions. So you decide to call the event’s customer service and then you have to wait for almost an hour until you can finally talk to an agent. </span></p>
<p><span style="font-weight: 400;">This may sound like a familiar story. </span>In this blog post, I will explain why it has become so common.</p>
<p><span style="font-weight: 400;">These bad experiences are mostly due to four main trends currently affecting contact centers.</span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">⚠️High volumes on the phone channel</span></h2>
<p><span style="font-weight: 400;">According to </span><a href="https://cdncom.cfigroup.com/wp-content/uploads/CFI-contact-center-satisfaction-2020.pdf"><span style="font-weight: 400;">CFI Group</span></a><span style="font-weight: 400;">, 76% of people reaching out to customer service choose to place a phone call.</span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">⚠️Unpredictable call volumes</span></h2>
<p><span style="font-weight: 400;">According to </span><a href="https://www.talkdesk.com/blog/contact-center-holiday-season/"><span style="font-weight: 400;">Talkdesk</span></a><span style="font-weight: 400;">, 50% of retail CX professionals say the top challenge they face is high variability in the amount of customer support needed during holidays, seasonal spikes, off-season dips and other periods. People are also adapting to new situations such as work-from-home, virtual interactions and changing travel rules.</span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">⚠️Rising call complexity</span></h2>
<p><span style="font-weight: 400;">According to the </span><a href="https://hbr.org/2020/04/supporting-customer-service-through-the-coronavirus-crisis"><span style="font-weight: 400;">Harvard Business Review</span></a><span style="font-weight: 400;"> which studied the effect of the COVID pandemic on customer service, the percentage of calls scored as “difficult” more than doubled, hold times increased by as much as 34 percent and escalations by 68 percent.</span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">⚠️Increasing contact center workforce costs &amp; complexity</span></h2>
<p><span style="font-weight: 400;">According to </span><a href="https://www.cgsinc.com/en/resources/infographic-ongoing-impact-covid-19-contact-center-support-services"><span style="font-weight: 400;">CGS</span></a><span style="font-weight: 400;">, 37% of companies are not confident or only somewhat confident in their ability to maintain service levels and prevent negative effects to service levels from additional waves of COVID. </span></p>
<p>&nbsp;</p>
<h2><span style="font-weight: 400;">🌀A perfect storm for contact centers</span></h2>
<p><span style="font-weight: 400;">The combination of these trends is putting a tremendous strain on contact centers. Managing a contact center workforce has become increasingly difficult with more unpredictable call volumes combined with hiring and staffing difficulties. </span></p>
<p><span style="font-weight: 400;">Not being able to reach a business can lower customer satisfaction and engagement, which in turn can lead to lost revenue and lower brand value.</span></p>
<p><span style="font-weight: 400;">How can contact centers keep up? Fortunately, businesses can use different strategies to mitigate the impact of these challenges. To choose the right ones, it is important to consider the complexity as well as the volume of each use case or category of calls. Call automation is becoming a key element for contact centers: it lets virtual agents handle simple, transactional calls and leaves more complex, value-added calls to human agents.</span></p>
<p><span style="font-weight: 400;">We will learn more about how contact centers can partner with providers to automate calls with reduced investments, more predictable costs and lower time to value in our soon-to-be-published article </span><em><span style="font-weight: 400;">We are currently experiencing higher than normal call volumes</span><span style="font-weight: 400;">. </span></em></p><p>The post <a href="https://www.nuecho.com/ladies-and-gentlemen-were-experiencing-some-turbulence-please-hold-the-line-while-we-try-to-find-an-available-agent/">Ladies and gentlemen, we’re experiencing some turbulence. Please hold the line while we try to find an available agent.</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Nuance Partner eXperience Summit Review: An accelerated transformation in a fluid market</title>
		<link>https://www.nuecho.com/nuance-mix-partner-experience-summit-conversationnal-ai-speech-to-text/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=nuance-mix-partner-experience-summit-conversationnal-ai-speech-to-text</link>
		
		<dc:creator><![CDATA[Yves Normandin]]></dc:creator>
		<pubDate>Mon, 02 Mar 2020 21:04:12 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Content]]></category>
		<category><![CDATA[IVR]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=6124</guid>

					<description><![CDATA[<p>Since Mark Benjamin joined Nuance as its new CEO almost two years ago, the company has been going through a breathtaking transformation. After selling its imaging division to Kofax and spinning off its automotive division, the company now focuses primarily on its core business of providing conversational AI products and solutions. Even in its core [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/nuance-mix-partner-experience-summit-conversationnal-ai-speech-to-text/">Nuance Partner eXperience Summit Review: An accelerated transformation in a fluid market</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">Since Mark Benjamin joined Nuance as its new CEO almost two years ago, the company has been going through a breathtaking transformation. After selling its imaging division to Kofax and spinning off its automotive division, the company now focuses primarily on its core business of providing conversational AI products and solutions.</span></p>
<p><span style="font-weight: 400;">Even in its core conversational AI business, <a target="_blank" href="https://www.nuance.com" rel="noopener noreferrer">Nuance</a> is fast transforming itself from a software company with a very large focus on professional services to a product and platform company. This was evident at the 2019 Partner eXperience Summit, but it was even more so at this year’s event.</span></p>
<p><span style="font-weight: 400;">This change was of course necessary and, I might add, a bit overdue. Long the dominant vendor of enterprise speech technology solutions, Nuance is now being challenged by companies – <a target="_blank" href="https://about.google" rel="noopener noreferrer">Google</a>, <a target="_blank" href="https://www.amazon.ca" rel="noopener noreferrer">Amazon</a>, <a target="_blank" href="https://www.microsoft.com/en-ca/" rel="noopener">Microsoft</a>, and <a target="_blank" href="https://www.ibm.com/ca-en" rel="noopener noreferrer">IBM</a> among others – that offer easy-to-use conversational AI platforms with state-of-the-art technologies. With these platforms, the claim is that anybody can now develop sophisticated conversational AI solutions; that speech recognition (ASR) and natural language understanding (NLU) work “out-of-the-box” without any need for speech scientists; and that, in fact, you don’t even need developers to build solutions. This is the “do-it-yourself” (DIY) message and it is a compelling one.</span></p>
<p><span style="font-weight: 400;">Of course, that message is highly misleading. Yes, to some extent, the technology now works “out-of-the-box” in the sense that it is possible to get a simple conversational demo bot up-and-running quickly. With speech-to-text (STT) engines, there is no need to write speech recognition grammars and NLU engines can be trained with a few training phrases per intent. But that’s only good for a demo. Building an effective, enterprise-grade conversational AI system is hard work, no matter what the platform is (more on that in a future blog post).</span></p>
<p><span style="font-weight: 400;">What is true, though, is that enterprises really are looking for DIY tools. And they are increasingly demanding cloud-native solutions. And, above all, they want flexibility. And Nuance has heard that message loud and clear. They now understand that it’s no longer sufficient to have best-in-class technology and a good professional services organization. Customers want to have flexible development and deployment models.</span></p>
<p>The most recent big steps that Nuance has taken in that direction are:</p>
<ul>
<li>Conversational AI APIs (launched November 2019);</li>
<li>The Nuance GateKeeper cloud-based security and biometrics suite (launched October 2019);</li>
<li>Nuance Mix: DIY tooling for partners and end users (general availability planned for end of March).</li>
</ul>
<p><span style="font-weight: 400;">The introduction of Nuance Mix, in particular, is a big change for a company that is used to directly delivering most of its conversational AI solutions through its professional services organization, using closely guarded development tools. But what we’ve seen so far of Mix is promising, with a slick, contemporary user interface. From a company that has years of experience building and deploying compelling conversational AI solutions, this is quite encouraging.</span></p>
<p><span style="font-weight: 400;">Nuance is facing powerful new competitors, but it has many advantages. Its technology is top-notch, it has a very large installed base, it offers the most flexible deployment models (premise or cloud), its technology is integrated with most contact center platforms, and it understands better than anybody what it takes to deliver conversational experiences that work not just in demos, but in the real world. Nuance also offers the most extensive capabilities to adapt and optimize the technology for a specific domain and a specific dialog state, which is often what makes the difference between a good demo and an enterprise-grade solution.</span></p>
<p><span style="font-weight: 400;">Another Nuance differentiator – which they position as a key element of their value proposition – is its strong professional services organization. But that could also turn out to be its Achilles&#8217; heel, because customers no longer want to be dependent on the vendor’s professional services; they want to know that there is a large pool of people skilled in the technology, with all the necessary tools. It will be a challenge to turn a company that is culturally used to delivering all the big projects into one that enables its partners and customers to do it themselves.</span></p>
<p><span style="font-weight: 400;">In conclusion, Nuance is clearly going in the right direction and making all the right moves, but its plan is ambitious, so execution will be key. Perhaps the biggest challenge will be to implement the culture changes that are required in order to successfully implement this transformation.</span></p>
<p><span style="font-weight: 400;">We’ve been in this market for close to 20 years and these are by far the most interesting times we’ve seen. We’re expecting quite a ride in the next few years.</span></p><p>The post <a href="https://www.nuecho.com/nuance-mix-partner-experience-summit-conversationnal-ai-speech-to-text/">Nuance Partner eXperience Summit Review: An accelerated transformation in a fluid market</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Natural Language Call Steering: Your Gateway to Conversational Automation</title>
		<link>https://www.nuecho.com/natural-language-call-steering-ivr-nlu-conversational-ivr/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=natural-language-call-steering-ivr-nlu-conversational-ivr</link>
		
		<dc:creator><![CDATA[Jean-Philippe Gariépy]]></dc:creator>
		<pubDate>Thu, 12 Dec 2019 15:35:04 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVR]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=5532</guid>

					<description><![CDATA[<p>A few years ago, when one picked up the phone to reach an organization, it was mostly to ask simple questions or perform simple tasks. The phone channel was the main doorway to obtain information and services. Over the years, however, digital channels and self-services, whether on the web or mobile, have been absorbing more [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/natural-language-call-steering-ivr-nlu-conversational-ivr/">Natural Language Call Steering: Your Gateway to Conversational Automation</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>A few years ago, when one picked up the phone to reach an organization, it was mostly to ask simple questions or perform simple tasks. The phone channel was the main doorway to obtain information and services. Over the years, however, digital channels and self-services, whether on the web or mobile, have been absorbing more and more of these requests. Nowadays, the phone channel is often used as a last recourse. For example, having a highly interactive conversation is easier on the phone than on digital channels. Or, when a technical issue occurs while using a self-service, one doesn’t have much choice but to pick up the phone to get human assistance to solve the problem.</p>
<p>Consequently, the phone channel now handles more complex problems, and it does so in a larger proportion than before. Since it has become the ultimate channel for these more complex or problematic customer journeys, organizations should strategically pay special attention to their phone channel. No one today would raise doubts about the relevance of offering clients a simple and efficient phone service experience to gain and maintain their loyalty.</p>
<h2><span style="font-size: 34px;">The word “conversational” on everyone’s lips</span></h2>
<p>The word “conversational” has been part of the customer experience vocabulary for a few years now; for more on that concept, you may refer to the “<a target="_blank" href="https://medium.com/cxinnovations/whaddya-mean-conversational-ivr-5e8fd0053277" rel="noopener">Whaddya mean, conversational IVR</a>?” post. Essentially, so-called conversational approaches allow users to express themselves freely by describing their issue in their own words, while supporting a low constraint dialogue structure. The role of the system in the conversation is to collect missing information while adapting to interruptions and topic changes.</p>
<p>When added to interactive voice response (IVR) systems, conversational interfaces revolutionize customer experience by allowing the caller to speak their entire request at once instead of navigating through a maze of menu options. The customer doesn’t need to adapt anymore. Rather, it’s the system that must.</p>
<h2>The wake-up phone call</h2>
<p>For those who have yet to deploy conversational solutions on the phone channel, defining a roadmap for this transition is a crucial step. The strategy must consider potential business benefits, as well as technological risks and organizational impacts. So, how do we get there?</p>
<p>We need a multi-step plan that will allow the organization to progressively assimilate transformations while maximizing benefits. The organization must learn and adjust. Indeed, developing and deploying conversational systems demand that we change the way we define specifications, design the user interface, develop and test the solution. In addition, natural language understanding (NLU) requires large quantities of user data, that is, recordings of calls along with their transcriptions. This is a new but essential element that we need to take into account during project planning.</p>
<h2>First words: <em style="font-size: 34px;">natural language call steering</em></h2>
<p>As a first step in the transition towards a conversational IVR, it’s quite interesting to consider a <em>natural language call steering</em> (NLCS) solution. The goal of such a solution is to allow callers to state the reason for their call in their own words. The system then analyzes and interprets the caller’s request and routes the call to the right destination, that is, either to appropriate agents or to a self-service module. This replaces menus, at least as the main interface, keeping menus as a useful fallback strategy. Illustrated below is a scenario of a caller interacting with an NLCS solution:</p>
<p><img decoding="async" class="wp-image-5684 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads//2019/12/NLCS-Blogpost-Conversation-en.png" alt="" width="451" height="683" srcset="https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Conversation-en.png 451w, https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Conversation-en-198x300.png 198w" sizes="(max-width: 451px) 100vw, 451px" /></p>
<p>What level of effort is required to deploy an NLCS solution? First, we must run a data collection. On the one hand, collecting speech data from real clients allows us to understand how they express themselves when they communicate with the organization. On the other hand, an inventory of products and services, internal and external terminology as well as the structure of the contact center (agent groups, skills, etc.) will help us understand the business domain.</p>
<p>Secondly, the data will be analyzed so that we can define an <em>intent catalogue</em>, that is, all the categories of customer requests/reasons for calling that can be handled by the system. For vague intents requiring a more precise classification, we will need to define a <em>disambiguation</em> strategy, which consists of defining the questions to ask the caller to help them clarify their intent. Finally, for every specific intent, we will need to determine the right destination, whether an agent group or a self-service, able to respond to the caller’s request.</p>
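<p>As an illustration, an intent catalogue with its disambiguation questions and destinations might be represented as a simple lookup structure. This is a hypothetical sketch – the intent names, destinations and prompts are invented for the example:</p>

```python
# Hypothetical intent catalogue: each entry maps a caller intent either to a
# concrete destination (agent group or self-service module), or, for vague
# intents, to a disambiguation question with more specific sub-intents.
INTENT_CATALOGUE = {
    "billing.payment": {"destination": "selfservice_pay_bill"},
    "billing.dispute": {"destination": "agents_billing"},
    "technical.support": {"destination": "agents_techsupport"},
    # A vague intent: we don't yet know which billing matter it is.
    "billing.vague": {
        "disambiguation": "Is this about making a payment, or about a charge on your bill?",
        "sub_intents": ["billing.payment", "billing.dispute"],
    },
}

def route(intent: str) -> dict:
    """Return a routing decision for a recognized intent: either a transfer
    to a destination, or a disambiguation question with its choices."""
    entry = INTENT_CATALOGUE[intent]
    if "destination" in entry:
        return {"action": "transfer", "destination": entry["destination"]}
    return {
        "action": "disambiguate",
        "prompt": entry["disambiguation"],
        "choices": entry["sub_intents"],
    }
```

<p>In a real project, this catalogue is derived from the analyzed call data rather than written by hand up front.</p>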
<p>With the data that was gathered during the data collection step, we will then train and tune the speech recognition and NLU modules to successfully recognize the utterances spoken by the callers and correctly classify their intents.</p>
<p>The IVR application for an NLCS is simple. This is partly what makes it an ideal first step in the transition towards conversational IVR. In the flow of such an application, the system asks an initial question, after which it may ask for a confirmation depending on the confidence level of the NLU module. Then, if the request is vague, a disambiguation dialogue may be necessary. Finally, based on the intent identified from the caller’s request, the call will be routed to the right destination.</p>
<p><img decoding="async" class="wp-image-5686 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads//2019/12/NLCS-Blogpost-Flow-en.png" alt="" width="500" height="703" srcset="https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Flow-en.png 500w, https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Flow-en-213x300.png 213w, https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Flow-en-480x675.png 480w" sizes="(max-width: 500px) 100vw, 500px" /></p>
<p>There are thus very few states in this application. This significantly simplifies design, development, as well as testing.</p>
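<p>The few states above can be sketched as a single confidence-driven decision. The threshold values and the assumed (intent, confidence) NLU output below are purely illustrative; real thresholds are tuned from pilot data:</p>

```python
# Sketch of the NLCS top-level decision, assuming the combined speech
# recognition + NLU step returns an intent with a confidence score.
# Threshold values are illustrative, not recommendations.
CONFIRM_THRESHOLD = 0.85   # at or above this, route without confirming
REJECT_THRESHOLD = 0.45    # below this, fall back to the menu strategy

def next_step(intent: str, confidence: float, is_vague: bool) -> str:
    if confidence < REJECT_THRESHOLD:
        return "fallback_menu"      # menus remain the fallback strategy
    if confidence < CONFIRM_THRESHOLD:
        return "confirm_intent"     # "You'd like to ..., is that right?"
    if is_vague:
        return "disambiguate"       # ask the clarifying question
    return "transfer"               # route to the destination
```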
<p>Once the application is completed, we will proceed with a pilot phase. Typically, during the pilot, the new NLCS solution will only be exposed to a fraction of the customer base. This will allow us not only to measure, but also to improve, the application’s performance. Caller utterances will be collected and used to enhance the NLU model. During the pilot, it would also be wise to take the opportunity to survey users on their new experience. Once adjustments are made, it will be possible to make the application available at a larger scale.</p>
<p>Following deployment in production, it is important to periodically retrieve caller data in order to improve the NLU model. These post-deployment optimizations allow the application to adapt to evolving customer requests.</p>
<p>Here is a simplified view of the development process:</p>
<p><img decoding="async" class="wp-image-5688 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads//2019/12/NLCS-Blogpost-Process-en.png" alt="" width="700" height="1107" srcset="https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Process-en.png 700w, https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Process-en-190x300.png 190w, https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Process-en-648x1024.png 648w, https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Process-en-480x759.png 480w" sizes="(max-width: 700px) 100vw, 700px" /></p>
<h2>Harvesting the fruits</h2>
<p>Reducing the number of incorrect transfers is the main benefit of a NLCS solution and will contribute to improving agent utilization. As a consequence, contact center operational efficiency will increase. By the same token, users will feel less frustrated, which will improve not only customer but also agent experience.</p>
<p>The efficiency and user-friendliness of the conversational interface, in conjunction with the fact that clients can express themselves naturally in their own words, are additional factors that contribute to a better customer experience. Since the contact center is the connection between the organization and its customers, a better customer experience translates into increased loyalty.</p>
<p>The new solution will shed light on the actual call reasons. A traditional IVR only reflects what callers select among the options the organization <em>assumes</em> they’re calling about; unfiltered natural language utterances, by contrast, can reveal recurring problems or questions that could be handled by self-services (phone or otherwise). Having this information on hand will also allow us to focus our design optimization efforts on the most frequent queries.</p>
<p>The NLCS also offers the possibility of regrouping many phone numbers under a single point of contact. It is no longer necessary to split services across numbers just to keep menus small and simple; everything is available from the first interaction. With a single number, we avoid frustrating callers by forcing them to look for the <em>right</em> number to use or having them enter through the <em>wrong</em> doorway.</p>
<h2>Gaining experience</h2>
<p>The NLCS allows the organization to gain experience in several aspects of conversational IVR. For example, when working with natural language speech recognition, it is not possible to use <em>deterministic</em> technical specifications, which are what we typically write when developing a traditional IVR. Language being a human phenomenon, it is impossible to anticipate all possible phrasings, which is why we resort to artificial intelligence systems that learn from input data. It is thus necessary to deal with the concept of uncertainty. The system comprises two steps, speech recognition and natural language understanding, each adding a degree of uncertainty to the end result.</p>
<p><img decoding="async" class="wp-image-5690 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads//2019/12/NLCS-Blogpost-Uncertainty.png" alt="" width="901" height="259" srcset="https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Uncertainty.png 901w, https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Uncertainty-300x86.png 300w, https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Uncertainty-768x221.png 768w, https://www.nuecho.com/wp-content/uploads/2019/12/NLCS-Blogpost-Uncertainty-480x138.png 480w" sizes="(max-width: 901px) 100vw, 901px" /></p>
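<p>To see how the two uncertainty sources compound, consider a simplified model in which the recognition and understanding steps fail independently, so their confidences multiply. The figures below are illustrative only:</p>

```python
# Simplified model: if speech recognition and NLU err independently,
# end-to-end confidence is the product of the two step confidences.
# Two individually confident steps still yield a noticeably lower
# combined confidence (e.g. 0.9 * 0.9 = 0.81).
def end_to_end_confidence(asr_confidence: float, nlu_confidence: float) -> float:
    return asr_confidence * nlu_confidence
```

<p>This is why confirmation and disambiguation strategies matter even when each module performs well on its own.</p>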
<p>Therefore if, before, it was enough to say…</p>
<p><em>“When the user presses the Y key, the call is transferred to department X”</em></p>
<p>&#8230;we must now describe the behavior as follows:</p>
<p><em>“When the system recognizes intent Y, the call is transferred to department X”</em></p>
<p>The way the intent is recognized cannot be formally described; it must be considered a black box. This not only impacts specifications, but also testing. We must consider the following aspects separately:</p>
<ul>
<li>The performance of intent detection, i.e. the combined performance of speech recognition and NLU</li>
<li>The application behavior following intent detection, in other words, once a given intent is recognized, the application will have to behave in a predetermined manner</li>
</ul>
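<p>These two aspects call for different kinds of tests: a statistical accuracy measure over a labelled utterance set for intent detection, and ordinary deterministic assertions for the behavior that follows. A minimal sketch, in which the NLU module is a trivial stub and the utterances and department names are invented for the example:</p>

```python
# Aspect 1: intent-detection performance, measured statistically over a
# labelled test set. The NLU is a black box; here it is a trivial stub.
def stub_nlu(utterance: str) -> str:
    return "billing" if "bill" in utterance.lower() else "other"

labelled_set = [
    ("I want to pay my bill", "billing"),
    ("there's a charge on my bill I don't recognize", "billing"),
    ("I'd like to change my address", "other"),
]
accuracy = sum(stub_nlu(u) == intent for u, intent in labelled_set) / len(labelled_set)

# Aspect 2: application behavior after detection is deterministic and can be
# asserted exactly: once intent Y is recognized, transfer to department X.
ROUTING = {"billing": "dept_billing", "other": "dept_general"}

def destination_for(intent: str) -> str:
    return ROUTING[intent]
```

<p>The first measure is tracked as a rate and improved with more data; the second is a pass/fail test that never depends on the NLU’s internals.</p>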
<p>This principle will stay relevant throughout the evolution towards a complete conversational solution.</p>
<p>This is only one example of the many practical aspects that require particular attention. We will also need to pay attention to performance evaluation reports, audio data collection and storage, execution of pilot experiments, post-deployment tuning, as well as the continuous improvement process.</p>
<h2>What next?</h2>
<p>Once the NLCS solution is in place, there are many possible avenues for exploring the conversational approach. We could, for instance, add a knowledge base query module. This solution would allow answering the user&#8217;s question directly instead of routing the call to an agent who would then need to run the search themselves in order to answer the caller.</p>
<p>It is also possible to offer transactional self-service modules in the phone channel. We could modify the existing self-services and introduce new ones. To help identify the most relevant self-services to add, we could leverage the data obtained through the call steering solution. These self-services will allow users to speak complex requests right off the bat. Contrary to the NLCS, whose dialogue structure is simple, conversational self-services demand a sophisticated dialogue engine that can take charge of mixed initiative dialogues and adequately manage specific dialogue events like digressions, topic changes and corrections (see this <a target="_blank" href="https://www.nuecho.com/news-events/corrections-in-conversational-ivr-part-1/" rel="noopener">two-part</a> <a target="_blank" href="https://www.nuecho.com/news-events/corrections-in-conversational-ivr-part-2/" rel="noopener">post</a> on corrections).</p>
<p>Since many self-services need to know the caller’s identity to execute operations linked to their profile, we will have to implement an identification and authentication module. And to preserve the naturalness of conversational interfaces, it would be sensible to opt for a voice biometrics solution to authenticate the caller. This method, which authenticates a client simply with their voice, is user-friendly, quick and efficient.</p>
<p>Having deployed an NLCS solution will allow the organization to face new challenges with confidence. The experience acquired will help the organization to anticipate technical challenges and will allow it to discover how conversational interfaces can bring value.</p><p>The post <a href="https://www.nuecho.com/natural-language-call-steering-ivr-nlu-conversational-ivr/">Natural Language Call Steering: Your Gateway to Conversational Automation</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Chatbots, Voicebots, IVA, IVR: Sorting through the confusion</title>
		<link>https://www.nuecho.com/chatbots-voicebots-iva-ivr/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=chatbots-voicebots-iva-ivr</link>
		
		<dc:creator><![CDATA[Yves Normandin]]></dc:creator>
		<pubDate>Wed, 11 Dec 2019 16:21:25 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVA]]></category>
		<category><![CDATA[IVR]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=5577</guid>

					<description><![CDATA[<p>​In the past few years, we have witnessed the introduction of a bunch of new terms and expressions related to conversational systems and interfaces: chatbots, voicebots, intelligent virtual agents (IVAs), intelligent virtual assistants (IVAs), etc. Unfortunately, all of these tend to mean different things to different people, which ends up generating a lot of confusion [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/chatbots-voicebots-iva-ivr/">Chatbots, Voicebots, IVA, IVR: Sorting through the confusion</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>​In the past few years, we have witnessed the introduction of a bunch of new terms and expressions related to conversational systems and interfaces: chatbots, voicebots, intelligent virtual agents (IVAs), intelligent virtual assistants (IVAs), etc. Unfortunately, all of these tend to mean different things to different people, which ends up generating a lot of confusion in the industry.</p>
<p>In an attempt to, if not eliminate, at least reduce some of that confusion, I’ll propose some broad definitions for these terms.</p>
<p>A <strong>chatbot</strong> is an automated system with which users interact through a “chat-like” interface. This includes messaging channels such as Messenger, WhatsApp, Slack&#8230; but it also includes SMS, iMessage, as well as other chat-like interfaces such as web chats, chat widgets in mobile applications, etc. Although chatbot interactions should primarily be done through text input and output, in practice they increasingly incorporate rich media (depending on what the channel supports) such as buttons, images, carousels, webviews, etc. In reality, many chatbots have little or no support for text input, relying primarily on buttons for user input. A chatbot is not necessarily conversational (see <a target="_blank" href="https://www.nuecho.com/news-events/what-do-you-mean-conversational-ivr/" rel="noopener">here</a> for an explanation of what we mean by conversational) and in fact most chatbots are highly directed, menu-driven “dialogs”.</p>
<p>A <strong>voicebot</strong> is a chatbot with which users can interact vocally. This assumes that the chatbot behind the voicebot can handle natural language input and it requires a capability to convert voice input into text (or directly into intents), as well as text output into voice. Example voicebots include any bots accessible through a voice channel, which include the now ubiquitous smart home speakers, but also the plain old telephone channel as well as any VoIP channel, for instance the call channels of Skype, Messenger, WhatsApp, Slack, etc. In that sense, a conversational IVR could be seen as a voicebot. Another example would be a Dialogflow voicebot, accessible through any voice channel, that takes advantage of Dialogflow’s ability to <a target="_blank" href="https://cloud.google.com/dialogflow-enterprise/docs/detect-intent-audio" rel="noopener">detect intent from audio</a>.</p>
<p>An <strong>Intelligent Virtual Agent (IVA)</strong> is a robot that simulates an agent (which, in this context, really means a contact center agent). It provides some of the services normally provided by a contact center agent through a communication with users – via voice or text channels – that resembles human-to-human communication. For reference, DMG defines an IVA as “<em>A system that utilizes artificial intelligence, machine learning, advanced speech technologies (including NLU/NLP/NLG) to simulate live and unstructured cognitive conversations for voice, text, or digital interactions via a digital persona.</em>” A virtual agent can hence be a chatbot, a voicebot, or both.</p>
<p>An <strong>Intelligent Virtual Assistant</strong> (also IVA, unfortunately) is a system that is dedicated to helping its user, either by providing useful information or advice (weather or traffic information, financial advice, etc.), by answering questions, or by accomplishing tasks on his/her behalf (e.g., planning meetings, booking hotels, paying bills, whatever). Interaction with an intelligent virtual assistant is often done through text or voice conversational channels, which effectively makes it a chatbot or a voicebot, but it can also be done through mobile or web applications.</p>
<p>An <strong>IVR (Interactive Voice Response)</strong> is an interactive telephone system that is primarily used in a call center to steer calls to the appropriate agent, and possibly to enable callers to perform some self-service transactions. Most IVR systems today are anything but conversational, relying instead primarily on menu navigation through DTMF (touch-tone) user inputs. Several IVR systems also enable speech input, but most of these only support voice menus and directed dialogs. More recently, natural language call steering applications, which enable callers to state the purpose of their call in their own words, have gained in popularity, but that remains a very small minority of IVR systems out there. The surge in popularity of conversational systems, however, is inevitably now impacting IVR, so expect to see a rapidly increasing number of <strong>IVR voicebots</strong> being deployed in the near future.</p><p>The post <a href="https://www.nuecho.com/chatbots-voicebots-iva-ivr/">Chatbots, Voicebots, IVA, IVR: Sorting through the confusion</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Making every call count</title>
		<link>https://www.nuecho.com/nlu-ivr-call-steering-speech-recognition-ui-ux/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=nlu-ivr-call-steering-speech-recognition-ui-ux</link>
		
		<dc:creator><![CDATA[Linda Thibault]]></dc:creator>
		<pubDate>Thu, 31 Oct 2019 14:00:10 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Content]]></category>
		<category><![CDATA[IVR]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=4303</guid>

					<description><![CDATA[<p>The ROI of a Natural Language Call Steering IVR Contact centers are still a central component of medium and large enterprises, and an essential channel for customers to connect with businesses. For a contact center to reach its full potential, it needs to meet the following business objectives: Customer service: provide customer support, deliver the [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/nlu-ivr-call-steering-speech-recognition-ui-ux/">Making every call count</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>The ROI of a Natural Language Call Steering IVR</h1>
<p>Contact centers are still a central component of medium and large enterprises, and an essential channel for customers to connect with businesses. For a contact center to reach its full potential, it needs to meet the following <a href="https://smallbusiness.chron.com/business-objectives-call-center-66358.html" target="_blank" rel="noopener">business objectives</a>:</p>
<ul>
<li><strong>Customer service</strong>: provide customer support, deliver the best possible experience and generate positive word of mouth</li>
<li><strong>Customer retention</strong>: retain existing customers and boost customer loyalty</li>
<li><strong>Operational efficiency</strong>: optimize average interaction and resolution time</li>
</ul>
<p>Contrary to popular belief, the <a href="https://financesonline.com/call-center-statistics/#channels" target="_blank" rel="noopener">telephone is still a widely used tool</a> to connect with enterprises. While customers appreciate being able to self-serve and perform transactions on their own—whether on their mobile phone or on the web—they still turn to contact centers for assistance when they have more complex issues or are having a hard time with self-service. In both cases, the customer is likely to call your company looking to <a href="https://hbr.org/2017/07/your-customers-still-want-to-talk-to-a-human-being" target="_blank" rel="noopener">speak with a real person</a>. And when they do, you want to be sure you make a strong impression!</p>
<p>Many enterprises today still rely on traditional touch-tone IVRs for call routing and to establish customer intent. But touch-tone menus are often complex, with multiple steps and options, confusing terminology and long wait times. Frustrated with the experience, customers end up hitting zero or selecting an incorrect option (any option!) hoping to reach an agent. This results in a high volume of misrouted calls, with customers having to repeat themselves and agents wasting valuable time talking to customers they can’t help.</p>
<blockquote><p><em>These frustrating experiences, for both the caller </em><em>and the agent, come at a cost: <a href="https://www.newvoicemedia.com/en-us/news/the-62-billion-customer-service-scared-away" target="_blank" rel="noopener">$62 billion in the U.S. alone in 2016.</a></em></p></blockquote>
<p>So how can you improve your customer experience in the IVR while optimizing your agent resources? For starters, by asking customers why they’re calling, instead of having them go through a bunch of menu options trying to get their issue resolved.</p>
<p>Natural Language Call Steering (NLCS) IVRs use natural language understanding and automatic speech recognition to listen to callers, interpret what they’re saying and understand what they need.</p>
<p>By precisely categorizing the caller’s request, the NLCS can route the caller to the right agent on the very first try, most times in a single interaction. This reduces the number of transfers and means callers don’t have to repeat themselves. Not only that, but the IVR can transfer the information it collects directly to the agent, giving them a heads-up as to why the customer is calling.</p>
<p>Making the transition from a touch-tone IVR to an NLCS solution is an investment, but one that will generate great returns. And sticking with the alternative can be extremely costly for companies in the long term, resulting in:</p>
<ul>
<li><strong>High call transfer rates</strong>, a costly issue that wastes valuable agent resources</li>
<li><strong>Customer frustration</strong>, which again wastes agent resources (with customers spending more time expressing their frustration than discussing their needs) and leads to low agent morale/high employee turnover</li>
<li><strong>Poor customer retention</strong>, with existing customers often leaving the company after a single negative experience</li>
<li><strong>Loss of potential customers</strong> if the IVR experience is negative and you fail to make a good first impression</li>
</ul>
<p>NLCS solutions result in fewer misrouted calls, reduced route times, fewer abandoned calls and better agent utilization—so you can improve your customer experience and <a href="https://www.carahsoft.com/application/files/7315/3235/1731/NUAN-CS-1104-01-DS_Call_Steering_r1.pdf" target="_blank" rel="noopener">reduce your operational costs</a>. And the benefits don’t stop there. NLCS IVRs give you the opportunity to:</p>
<ul>
<li><strong>Identify new self-service opportunities</strong>: by understanding why people are calling, you can identify issues that could be addressed right in the IVR, greatly reducing the volume of calls into the contact center</li>
<li><strong>Offer simplified access to your services,</strong> by consolidating all your services in a single phone number</li>
<li><strong>Gather key data with analytics,</strong> giving you better insights into your customers, products and services</li>
</ul>
<p>NLCS IVR solutions are a game changer, leveraging the latest conversational technologies to deliver an intuitive, human-like caller experience. Bring your voice channel into the future to improve your customer experience, boost customer loyalty, optimize your agent resources—and make every call count.</p>
<p><em>Many thanks to my colleague Annie Brasseur for her invaluable contribution to this post, and to Yves Normandin for his insightful comments.</em></p>
<p><em>Download our <a href="https://www.nuecho.com/wp-content/uploads/2019/10/one-pager.pdf" target="_blank" rel="attachment noopener wp-att-4406">one pager</a>!</em></p><p>The post <a href="https://www.nuecho.com/nlu-ivr-call-steering-speech-recognition-ui-ux/">Making every call count</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Developing Conversational IVR Using Rasa Part 3: Dialogue Management</title>
		<link>https://www.nuecho.com/developing-conversational-ivr-using-rasa-part-3-dialogue-management/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=developing-conversational-ivr-using-rasa-part-3-dialogue-management</link>
		
		<dc:creator><![CDATA[Laurence Dupont]]></dc:creator>
		<pubDate>Tue, 23 Jul 2019 13:00:57 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVR]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=4167</guid>

					<description><![CDATA[<p>Dialogue requirements Our banking IVR is a self-service application in which a caller can execute tasks like paying a bill or getting an account balance. The dialogue includes a loop that allows the caller to carry out tasks as many times as he or she wants: &#160; The application accommodates expert users by allowing them [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/developing-conversational-ivr-using-rasa-part-3-dialogue-management/">Developing Conversational IVR Using Rasa Part 3: Dialogue Management</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>Dialogue requirements</h1>
<p>Our banking IVR is a self-service application in which a caller can execute tasks like paying a bill or getting an account balance. The dialogue includes a loop that allows the caller to carry out tasks as many times as he or she wants:</p>
<p><img decoding="async" class="wp-image-4171 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/07/1-1.jpg" alt="" width="499" height="566" /></p>
<p>&nbsp;</p>
<p>The application accommodates expert users by allowing them to quickly complete their tasks:</p>
<p><img decoding="async" class="wp-image-4173 size-full alignnone" style="display: block; margin-left: auto; margin-right: auto;" src="https://www.nuecho.com/wp-content/uploads/2019/07/2-1.jpg" alt="" width="479" height="207" /></p>
<p>&nbsp;</p>
<p>It also adapts to less experienced users by providing them with a more directed dialogue when needed:</p>
<p><img decoding="async" class="wp-image-4175 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/07/3-1.jpg" alt="" width="487" height="348" /></p>
<p>&nbsp;</p>
<p>The IVR supports mixed initiative strategies like digressions:</p>
<p><img decoding="async" class="wp-image-4177 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/07/4-1.jpg" alt="" width="486" height="335" /></p>
<p>&nbsp;</p>
<p>As well as change requests or corrections:</p>
<p><img decoding="async" class="wp-image-4179 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/07/5.jpg" alt="" width="476" height="212" /></p>
<p>&nbsp;</p>
<p>The dialogue also handles error recovery, including errors that are specific to the voice channel:</p>
<p><img decoding="async" class="wp-image-4181 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/07/6.jpg" alt="" width="480" height="613" /></p>
<p>&nbsp;</p>
<p>This is only an overview of the patterns handled by our dialogue model; several others are already implemented. We are also planning on adding other dialogue strategies, such as cancelling a task or handling more complex <a href="https://www.nuecho.com/news-events/corrections-in-conversational-ivr-part-1/" target="_blank" rel="noopener">corrections</a> and <a href="https://www.nuecho.com/news-events/corrections-in-conversational-ivr-part-2/" target="_blank" rel="noopener">change requests</a>.</p>
<h1>Implementation: why we opted for a deterministic approach</h1>
<p>Early on, we decided to rely on a deterministic approach for our use cases. The main reasons why machine learning was less suited to our current use cases are as follows:</p>
<ul>
<li>Task related dialogues are predictable and relatively directed</li>
<li>Tasks can be executed repeatedly and the application behavior should be identical every time</li>
<li>Tasks should be interrupted and resumed reliably</li>
<li>Tokens of information that are collected for a given task are not relevant for another</li>
<li>Tasks are independent and must not interact with each other</li>
</ul>
<p>In addition, our requirements for error recovery and change management were rather strict.</p>
<h1>Deterministic approach</h1>
<p>Our deterministic approach consists of managing actions using a stack, a last-in-first-out (LIFO) data structure: the last item added to the stack is the first one to be removed. The action at the top of the stack is the one in focus. When we add an action to the stack, it takes the focus. When the action in focus is completed, it is removed from the stack and the dialogue returns to the previous action. This lets us interrupt and resume actions in a predictable and robust manner.</p>
<p>This can be illustrated as follows:</p>
<p><img decoding="async" class="wp-image-4183 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/07/7.jpg" alt="" width="797" height="190" /></p>
<p>&nbsp;</p>
<p>Here is an example of the state of a stack for a dialogue in which the user interrupts the ongoing action (digresses) to ask for a list of their accounts. Once the digression to hear the list of accounts is completed, this action is removed from the stack and the focus comes back to the previous action, that is, the bill payment action.</p>
<p><img decoding="async" class="wp-image-4208 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/07/88.jpg" alt="" width="638" height="802" srcset="https://www.nuecho.com/wp-content/uploads/2019/07/88.jpg 638w, https://www.nuecho.com/wp-content/uploads/2019/07/88-239x300.jpg 239w" sizes="(max-width: 638px) 100vw, 638px" /></p>
<p>Besides letting us manage digressions elegantly, this approach lets us define isolated contexts for each action, which ensures that actions remain independent.</p>
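<p>The stack behavior described above can be sketched in a few lines of Python. This is a minimal illustration only; the action names are hypothetical and not taken from the actual application:</p>

```python
class ActionStack:
    """Last-in-first-out stack of dialogue actions; the top is in focus."""

    def __init__(self):
        self._stack = []

    def push(self, action):
        # The newly added action takes the focus.
        self._stack.append(action)

    def complete_focus(self):
        # Remove the completed action; focus returns to the previous one.
        return self._stack.pop()

    @property
    def focus(self):
        return self._stack[-1] if self._stack else None


# Digression example: the user interrupts bill payment to list accounts.
stack = ActionStack()
stack.push("pay_bill")        # focus: pay_bill
stack.push("list_accounts")   # the digression takes the focus
stack.complete_focus()        # digression done...
print(stack.focus)            # ...focus returns to "pay_bill"
```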
<p>Next are some additional details on how this was implemented using Rasa.</p>
<h3>Custom policy</h3>
<p>In Rasa, a <a href="https://rasa.com/docs/rasa/core/policies/" target="_blank" rel="noopener">policy</a> predicts/specifies the next <a href="https://rasa.com/docs/rasa/core/actions/" target="_blank" rel="noopener">action</a> to be executed in the dialogue, depending on the context (<a href="https://rasa.com/docs/rasa/api/events/#force-a-followup-action" target="_blank" rel="noopener">tracker</a>). Out of the box, Rasa offers a combination of deterministic and machine-learning-based policies, and it is also possible to create your own. This is what we did: we implemented a deterministic policy that alternates between waiting for the next user input and triggering an action used as an action manager (responsible for managing the action stack). Since we do not make predictions about which action must be executed, our policy does not use <a href="https://rasa.com/docs/rasa/core/stories/" target="_blank" rel="noopener">stories</a> and does not require a training phase. This is the only policy that we use.</p>
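<p>The alternation such a policy performs can be sketched as follows. This is a stand-in illustration, not the real Rasa Policy API; the event and action names are assumptions:</p>

```python
# Stand-in sketch of the deterministic alternation: after a user
# utterance, run the action manager; after the action manager, listen
# again. All names here are hypothetical.

ACTION_LISTEN = "action_listen"
ACTION_MANAGER = "action_manager"


def next_action(last_event: str) -> str:
    """Alternate between the action manager and waiting for user input."""
    if last_event == "user_uttered":
        # A user message arrived: hand control to the action manager,
        # which consults the action stack to decide what to do.
        return ACTION_MANAGER
    # The action manager just ran: wait for the next user input.
    return ACTION_LISTEN


print(next_action("user_uttered"))    # → action_manager
print(next_action("action_executed"))  # → action_listen
```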
<h3>Action manager</h3>
<p>We have created an abstraction to manage the action stack that was described above. The stack is a complex object that we store in an <a href="https://rasa.com/docs/rasa/core/slots/#unfeaturized-slot" target="_blank" rel="noopener">unfeaturized slot</a>. The action to be executed depends on the user’s input as well as the state of the stack.</p>
<h3>Custom dialogue patterns</h3>
<p>One frequent dialogue pattern is information token collection, or slot-filling. This pattern is used, for instance, by the bill payment action. Rasa provides an action for this pattern: the <a href="https://rasa.com/docs/rasa/core/forms/" target="_blank" rel="noopener">FormAction</a>. However, we needed to support more complex patterns than what the FormAction offers, for example slot confirmation when the speech recognition confidence score is low, or final confirmation at the end of an action. We have therefore created a custom “Task” class that handles these more complex patterns, and some of our actions inherit from it. We appreciate that Rasa offers the flexibility we need to implement our own dialogue management strategies.</p>
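<p>A minimal sketch of the kind of slot-filling logic such a Task class might implement. The slot names and the 0.6 confidence threshold are assumptions for illustration, not values from the actual application:</p>

```python
# Hypothetical slot-filling step selection: ask for missing slots,
# confirm a slot when its ASR confidence is low (a confirmed slot would
# then have its confidence raised), and finish with a final confirmation.

CONFIRM_THRESHOLD = 0.6  # assumed threshold, for illustration only


def next_step(required_slots, filled, confidences):
    """Return the next dialogue step for a slot-filling task."""
    for slot in required_slots:
        if slot not in filled:
            return ("ask", slot)                      # still missing
        if confidences.get(slot, 1.0) < CONFIRM_THRESHOLD:
            return ("confirm", slot)                  # low ASR confidence
    return ("final_confirmation", None)               # all slots collected


# Bill payment: the amount was heard with low confidence, so confirm it.
step = next_step(
    required_slots=["payee", "amount"],
    filled={"payee": "Hydro", "amount": "200"},
    confidences={"payee": 0.9, "amount": 0.4},
)
print(step)  # → ('confirm', 'amount')
```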
<h3>Unit test framework</h3>
<p>Since we have implemented our dialogues using a deterministic approach, we were able to build our own unit test framework to test our dialogues. This allowed us to increase our application’s reliability.</p>
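<p>As an illustration of the idea, a deterministic dialogue lets a unit test assert the exact system response for a scripted sequence of user inputs. The harness below is a toy stand-in, not the actual framework; the prompts and intent names are invented:</p>

```python
# Toy deterministic dialogue harness: since the application behaves
# identically on every run, tests can assert exact responses.

def run_dialogue(turns):
    """Return the system response for each scripted user turn."""
    responses = []
    for turn in turns:
        if turn == "start_conversation":
            responses.append("Welcome! How can I help you?")
        else:
            responses.append(f"Starting task: {turn}")
    return responses


def test_bill_payment_greeting():
    out = run_dialogue(["start_conversation", "pay_bill"])
    assert out[0] == "Welcome! How can I help you?"
    assert out[1] == "Starting task: pay_bill"


test_bill_payment_greeting()
print("ok")
```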
<h1>Next steps</h1>
<p>Although we have been relying on a deterministic approach to develop our banking use cases, we are also currently experimenting with machine learning to develop dialogues for different use cases.</p>
<p>Here are some of the next items that we will explore:</p>
<ul>
<li>Use the <a href="https://rasa.com/docs/rasa/core/policies/#embedding-policy" target="_blank" rel="noopener">Recurrent Embedding Dialogue Policy</a> to support <a href="https://blog.rasa.com/attention-dialogue-and-learning-reusable-patterns/" target="_blank" rel="noopener">uncooperative user behavior</a></li>
<li>Use <a href="https://rasa.com/docs/rasa-x/" target="_blank" rel="noopener">Rasa X</a> to learn from real conversations</li>
<li>Create more natural dialogues by using recorded prompts instead of TTS</li>
<li>Try to integrate machine learning in our deterministic model</li>
</ul>
<p>We will share our experience in future posts as we move forward. Stay tuned!</p><p>The post <a href="https://www.nuecho.com/developing-conversational-ivr-using-rasa-part-3-dialogue-management/">Developing Conversational IVR Using Rasa Part 3: Dialogue Management</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Developing Conversational IVR Using Rasa Part 2: The Rivr Bridge</title>
		<link>https://www.nuecho.com/developing-conversational-ivr-using-rasa-part-2-the-rivr-bridge/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=developing-conversational-ivr-using-rasa-part-2-the-rivr-bridge</link>
		
		<dc:creator><![CDATA[Karine Dery]]></dc:creator>
		<pubDate>Tue, 09 Jul 2019 13:00:31 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVR]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=4118</guid>

					<description><![CDATA[<p>On the Rivr Bridge&#8230; I know, there’s only one question on your mind right now: “What in the world is that name?”. First off, “Rivr” is because it uses Rivr, subtly mentioned in the first post, which is a Nu Echo created, open-sourced framework to write VoiceXML applications, entirely in Java. Then “Bridge” because it [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/developing-conversational-ivr-using-rasa-part-2-the-rivr-bridge/">Developing Conversational IVR Using Rasa Part 2: The Rivr Bridge</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>On the Rivr Bridge&#8230;</h1>
<p>I know, there’s only one question on your mind right now: “What in the world is that name?”. First off, “<a href="https://github.com/nuecho/rivr" target="_blank" rel="noopener">Rivr</a>” is because it uses Rivr, subtly mentioned in the first post, which is a <a target="_blank" href="https://www.nuecho.com/" rel="noopener">Nu Echo</a> created, open-sourced framework to write VoiceXML applications, entirely in Java. Then “Bridge” because it links a VoiceXML platform with the chosen dialogue engine. And yes, the pun was intended (but not by me).</p>
<p>But the real question is: “What does it do?”. As I said, Rivr is a framework to develop full-fledged applications, but the Rivr Bridge’s goal is only to translate what comes in and out of the VoiceXML platform and throw it to the Rasa side of the world in a digestible format. For instance, a classic Rivr application would programmatically process each user input and define the next dialogue steps, unlike the Rivr Bridge, which would query the chosen dialogue engine to decide the next dialogue steps. Adapting the model was simple, maybe even simpler than we thought. It roughly looks like this:</p>
<p><img decoding="async" class="wp-image-4128 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/07/1.jpg" alt="" width="1059" height="613" srcset="https://www.nuecho.com/wp-content/uploads/2019/07/1.jpg 1059w, https://www.nuecho.com/wp-content/uploads/2019/07/1-300x174.jpg 300w, https://www.nuecho.com/wp-content/uploads/2019/07/1-768x445.jpg 768w, https://www.nuecho.com/wp-content/uploads/2019/07/1-1024x593.jpg 1024w" sizes="(max-width: 1059px) 100vw, 1059px" /></p>
<p>The great advantage of using the Rivr Bridge is that it interprets the VoiceXML platform’s input and generates bulletproof VoiceXML. For reusability purposes, we decided to make the Bridge platform-agnostic and application-agnostic, and let an IVR channel on the Rasa side manage the Rasa-specific aspects, which would allow us to eventually plug in other dialogue engines.</p>
<p>Here is an artistic representation of our input pipeline:</p>
<p><img decoding="async" class="aligncenter wp-image-4130 size-full" src="https://www.nuecho.com/wp-content/uploads/2019/07/2.jpg" alt="" width="1046" height="386" srcset="https://www.nuecho.com/wp-content/uploads/2019/07/2.jpg 1046w, https://www.nuecho.com/wp-content/uploads/2019/07/2-300x111.jpg 300w, https://www.nuecho.com/wp-content/uploads/2019/07/2-768x283.jpg 768w, https://www.nuecho.com/wp-content/uploads/2019/07/2-1024x378.jpg 1024w" sizes="(max-width: 1046px) 100vw, 1046px" /></p>
<h1>… Through an IVR JSON Protocol&#8230;</h1>
<p>To better define the content of the requests and responses exchanged by the Rivr Bridge and the IVR channel, we designed a generic JSON protocol that can represent all the information a conversational IVR application needs when using VoiceXML. The protocol describes five types of input: data (initialization data, for example the caller’s phone number or any other information the platform is set to return), user input (vocal or using the keypad) with its recognition/interpretation result, recording (of the user’s voice), transfer details (status, duration, etc.), and event (hangup, noinput, nomatch&#8230;). As for outputs, we only designed support for interaction (the dialogue asks for a user input) and exit/hangup, which was enough to cover our use cases.</p>
<p>As an example, to ask a question and wait for the answer, the dialogue could send this payload:</p>
<p><img decoding="async" class="wp-image-4162 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/07/Webp.net-resizeimage-1.jpg" alt="" width="811" height="896" srcset="https://www.nuecho.com/wp-content/uploads/2019/07/Webp.net-resizeimage-1.jpg 811w, https://www.nuecho.com/wp-content/uploads/2019/07/Webp.net-resizeimage-1-272x300.jpg 272w, https://www.nuecho.com/wp-content/uploads/2019/07/Webp.net-resizeimage-1-768x848.jpg 768w" sizes="(max-width: 811px) 100vw, 811px" /></p>
<p>And the result sent by the Bridge could be:</p>
<p><img decoding="async" class="aligncenter wp-image-4155 size-full" src="https://www.nuecho.com/wp-content/uploads/2019/07/Capture.jpg" alt="" width="811" height="652" srcset="https://www.nuecho.com/wp-content/uploads/2019/07/Capture.jpg 811w, https://www.nuecho.com/wp-content/uploads/2019/07/Capture-300x241.jpg 300w, https://www.nuecho.com/wp-content/uploads/2019/07/Capture-768x617.jpg 768w" sizes="(max-width: 811px) 100vw, 811px" /></p>
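<p>Since the actual payloads above are shown as screenshots, here is a hypothetical sketch of what an &#8220;interaction&#8221; output in such a protocol could look like. Every field name below is invented for illustration and does not reproduce the real payload:</p>

```python
import json

# Hypothetical "interaction" output: the dialogue asks a question and
# waits for an answer. All keys are illustrative assumptions.
interaction = {
    "type": "interaction",
    "prompts": [{"text": "Which bill would you like to pay?"}],
    "recognition": {
        "grammars": ["payees.grxml"],
        "inputModes": ["speech", "dtmf"],
        "noInputTimeoutMs": 5000,
    },
}

print(json.dumps(interaction, indent=2))
```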
<h1>… To the IVR Channel</h1>
<p>Not a lot was then left for the IVR channel to do. Concerning inputs, each one would need some processing to be made accessible to the dialogue management. Specifically, inputs have to fit into Rasa’s NLU result format (namely, a string following the template: `intent@confidenceScore{“entityType”: entityValue, &#8230;}`). With well written <a href="https://en.wikipedia.org/wiki/Speech_Recognition_Grammar_Specification" target="_blank" rel="noopener">grammars</a>, this step’s implementation was rather simple for recognition results, but could have been tricky for input types with no intent nor entities (data, events), for which we still wanted to trigger a dialogue turn. To solve that problem, we could either create synthetic intents and entities representing the information we wanted to pass on, or insert it directly in the <a href="http://rasa.com/docs/rasa/user-guide/architecture/">tracker</a> and send a semantically empty input. We went for the first option, and created four synthetic intents to date:<br />
&#8211; start_conversation (with a data entity containing initialization data as a JSON object)<br />
&#8211; noinput<br />
&#8211; nomatch<br />
&#8211; hangup</p>
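<p>Producing a message in the template mentioned above can be sketched as follows. The intent name, confidence score and entities are hypothetical:</p>

```python
import json

# Build a string following the template: intent@confidenceScore{entities}.
# The example values below are invented for illustration.
def to_nlu_message(intent, confidence, entities):
    return f"{intent}@{confidence}{json.dumps(entities)}"


msg = to_nlu_message("pay_bill", 0.87, {"payee": "Hydro"})
print(msg)  # → pay_bill@0.87{"payee": "Hydro"}
```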
<p>For the outputs, yet again some formatting was necessary, but since Rasa gives us full liberty on the output content through <a href="http://rasa.com/docs/rasa/core/domains/#custom-output-payloads">custom payloads</a>, this was pretty straightforward. The (tiny bit more) delicate work was to concatenate and validate outputs from different parts of the dialogue. Rivr supports playing messages alone (without a recognition or hangup step), and it could be a nice feature for our Rasa dialogues, but would have required a bit more gymnastics in both the channel and the Bridge, so we chose not to implement it for now.</p>
<p>Ok, presenting it like that, maybe the IVR channel had a lot to do even with the use of the Rivr Bridge. But it was still less than generating VoiceXML content would have been. Thanks Rivr! To discover the journey of those user inputs once they enter the Rasa ocean, read the yet-to-come rest of the series!</p><p>The post <a href="https://www.nuecho.com/developing-conversational-ivr-using-rasa-part-2-the-rivr-bridge/">Developing Conversational IVR Using Rasa Part 2: The Rivr Bridge</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Developing Conversational IVR Using Rasa</title>
		<link>https://www.nuecho.com/developing-conversational-ivr-using-rasa/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=developing-conversational-ivr-using-rasa</link>
		
		<dc:creator><![CDATA[David Morand]]></dc:creator>
		<pubDate>Wed, 26 Jun 2019 17:51:35 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVR]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=4066</guid>

					<description><![CDATA[<p>The lasting relevance of VXML Since Nu Echo’s foundation in 2002, VoiceXML has been the bread and butter of our IVR application development. We’ve been using it to create many solutions ranging from turnkey address change and identity validation modules to large scale IVR system using a custom JavaScript framework. We even open-sourced a Java [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/developing-conversational-ivr-using-rasa/">Developing Conversational IVR Using Rasa</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>The lasting relevance of VXML</h2>
<p>Since Nu Echo’s foundation in 2002, <a href="https://en.wikipedia.org/wiki/VoiceXML" target="_blank" rel="noopener">VoiceXML</a> has been the bread and butter of our IVR application development. We’ve used it to create many solutions, ranging from turnkey address change and identity validation modules to large-scale IVR systems built on a custom JavaScript framework. We even open-sourced a Java VoiceXML development framework named <a href="https://github.com/nuecho/rivr" target="_blank" rel="noopener">Rivr</a> &#8211; check it out!</p>
<p>While it is definitely possible to develop IVR applications in 2019 without using VoiceXML (think <a href="https://dialogflow.com/" target="_blank" rel="noopener">Dialogflow</a> or <a href="https://aws.amazon.com/lex/" target="_blank" rel="noopener">Amazon Lex</a> through <a href="https://aws.amazon.com/connect/" target="_blank" rel="noopener">Amazon Connect</a>), it remains very prevalent in large contact centers, which are an important part of our customer base. That’s why we decided to give Rasa a serious spin to find out whether it could be a viable solution for developing multilingual conversational IVR applications using VoiceXML.</p>
<p>&nbsp;</p>
<h2>The use cases</h2>
<p>For our proof of concept, we selected some banking use cases (account balance and pay bill) that offer us interesting dialogue patterns like digressions, confirmations, <a target="_blank" href="https://www.nuecho.com/news-events/corrections-in-conversational-ivr-part-1/" rel="noopener">corrections </a>and global commands (cancelling a task for example).</p>
<p>Here is an example that mixes some of those patterns:</p>
<p><img decoding="async" class="wp-image-4106 aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/06/sample_dialogue_annotated-1.jpg" alt="" width="864" height="574" srcset="https://www.nuecho.com/wp-content/uploads/2019/06/sample_dialogue_annotated-1.jpg 899w, https://www.nuecho.com/wp-content/uploads/2019/06/sample_dialogue_annotated-1-300x199.jpg 300w, https://www.nuecho.com/wp-content/uploads/2019/06/sample_dialogue_annotated-1-768x510.jpg 768w" sizes="(max-width: 864px) 100vw, 864px" /></p>
<p>&nbsp;</p>
<h2>Introducing Rasa IVR</h2>
<p>Developing VoiceXML IVR applications using Rasa offers interesting challenges. For one, a voice conversation happens in real time and must progress even if the user says nothing, which is quite different from the classic chatbot approach. The application can’t leave a silent user waiting; it must propose <a href="https://www.nuecho.com/news-events/dialogflow-distilled-on-error-handling/">alternate and more detailed messages</a> and eventually terminate the conversation if the user decides to stay silent.</p>
<p>While Rasa offers a lot of <a href="http://rasa.com/docs/rasa/user-guide/messaging-and-voice-channels/" target="_blank" rel="noopener">prebuilt channels</a>, nothing exists to express the richness of VoiceXML and interpret its different outputs. As you can see from the complexity of its <a href="https://www.w3.org/TR/voicexml20/" target="_blank" rel="noopener">specification</a>, a lot must be done to cover all the functionalities (although some of them are less used than others). Some of the most important ones that must be taken into consideration for developing basic use cases are related to constructing the output (audio files / speech synthesis), activating <a href="https://www.w3.org/TR/voicexml20/#dml4.1.5" target="_blank" rel="noopener">bargein</a>, specifying grammars / input mode (speech and/or <a href="https://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling" target="_blank" rel="noopener">DTMF</a>), and configuring confidence levels / timeouts.</p>
<p>The user experience (UX) must also be tailored for the voice channel since the user cannot scroll up to access the whole conversation (unless he has a supernatural memory). Some patterns like confirmation or choosing in a list are much trickier to properly implement using voice than text and widgets.</p>
<p>Along with those challenges come some interesting opportunities. For example, using automatic speech recognition (ASR) alongside contextual grammars allows us to greatly improve the recognition accuracy by giving a greater weight to the most probable responses. VoiceXML also offers many functionalities for a better integration to the contact center, which must be exposed (agent transfer, attached data, recordings). The synchronous aspect of the conversation also simplifies the implementation since the user can’t frantically send multiple (sometimes contradictory) inputs.</p>
<p>As I said earlier, this post is the first of a series that will cover different aspects of the making of our conversational banking application proof of concept. Stay tuned for more articles from my colleagues on our approach toward generating VoiceXML, dialogue management, cloud deployment and more!</p>
<p>Many thanks to my colleagues <a href="https://medium.com/@linda.thibault" target="_blank" rel="noopener">Linda Thibault</a> and <a href="https://medium.com/@guillaume.voisine" target="_blank" rel="noopener">Guillaume Voisine</a> for their precious advice!</p><p>The post <a href="https://www.nuecho.com/developing-conversational-ivr-using-rasa/">Developing Conversational IVR Using Rasa</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How Far Can You Go? Managing Expectations in Conversational IVR Demos</title>
		<link>https://www.nuecho.com/how-far-can-you-go/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=how-far-can-you-go</link>
		
		<dc:creator><![CDATA[Linda Thibault]]></dc:creator>
		<pubDate>Mon, 27 May 2019 16:56:41 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[IVR]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/?p=3638</guid>

					<description><![CDATA[<p>As conversational IVR designers and developers, our team is often asked to provide demos as a communication and sales tool with prospective clients. When a fully functioning interactive demo is needed, putting it together can require weeks of effort and involve many people with various skills. That can quickly amount to a significant budget. In [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/how-far-can-you-go/">How Far Can You Go? Managing Expectations in Conversational IVR Demos</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>As conversational IVR designers and developers, our team is often asked to provide demos as a communication and sales tool with prospective clients. When a fully functioning interactive demo is needed, putting it together can require weeks of effort and involve many people with various skills. That can quickly amount to a significant budget.</p>
<p>In other situations, we instead produce canned demos, whose purpose is to “wow” potential clients by showcasing inspiring use cases and state-of-the-art functionalities &#8211; even before the actual development takes place.</p>
<p>Between these two extremes, other types of demos can serve various purposes and can allow potential or existing clients to try, test and appreciate what a conversational IVR can do.</p>
<p>Different types of demos serve different purposes, require varying amounts of effort and budget, allow diverse levels of hands-on experience and involvement from those who try them, and present varying degrees of risk. Investing in the right type of demo for the right usage and the right prospect can be a determining factor in that demo’s success.</p>
<p>In the following paragraphs, I will describe four categories of demos, from the least to the most interactive, with their main characteristics, proposed usage, advantages and risks.</p>
<p>&nbsp;</p>
<h3><strong>The canned demo</strong></h3>
<p>This is the type of demo that you often find on YouTube, where overly happy young people have an almost human conversation with the IVR. It recognizes the caller, understands everything, asks the right questions, reacts naturally when interrupted, and sometimes even risks a joke.</p>
<p>This type of demo is great to showcase the possibilities of a conversational IVR, to wow potential clients and make them dream, but also to provide decision makers with a vision of what their IVR could do for them, if they hired you. The canned demo is ideal to approach new prospects without investing a lot and without being too intrusive. Sales can safely use canned demos in early discussions with clients, and these demos can live on your website and be sent to anyone who’s interested in knowing about the potential of a conversational IVR.</p>
<p>On the flip side, canned demos can easily generate unrealistic expectations when the scenarios are too far from what your company can actually, realistically deliver. Overpromising is a risk, as well as making things look way easier than what they actually are to design and develop.</p>
<p>&nbsp;</p>
<h3><strong>The scripted demo</strong></h3>
<p>The scripted demo consists of a working IVR that only supports a very finite number of scenarios, typically involving happy path use cases. It works well when the person calling the demo IVR knows exactly what to say, when, and how to say it.</p>
<p>This type of demo can allow you to demonstrate the technical feasibility of a few sample use cases in a fully controlled environment. Developing such a demo requires designing and developing a simple IVR, but it does not require planning multiple scenarios or training sophisticated NLU models. Simple static speech grammars can be used, as long as the sample scenarios are covered.</p>
<p>While useful, this type of demo can easily underwhelm the recipient: it does not provide much of a wow factor, nor does it provide any hands-on experience, since the prospective client is not really allowed to try it, with or without supervision. Use it sparingly.</p>
<p>&nbsp;</p>
<h3><strong>The environmentally controlled demo</strong></h3>
<p>The purpose of the controlled demo is to give your current or prospective client the opportunity to call a prototype of a conversational IVR, while making it clear which use cases or scenarios are included, and making sure you’re in the room when they call. It must be clear that the prototype is not production grade, but that it covers a large enough panorama of use cases and scenarios, including more than just happy paths.</p>
<p>This type of demo is great when you have an opportunity for more involved and detailed discussions with your client and wish to impress them with a sophisticated, well functioning demo. By trying it firsthand, your clients will get a feel for what their customers’ experience could be. By supervising the demo session, you have an opportunity to encourage your clients to try specific scenarios that you want to put forward, you can answer any questions they have and provide any technical details that they may find relevant or interesting. When properly done, this type of demo can convince a prospective client to move forward with your project.</p>
<p>This type of demo, however, requires quite a lot of work and presents some risks. To function properly, multiple scenarios must be anticipated and designed, including mixed initiative dialogues and error recovery strategies. In addition, the application must be robust to user input, which means a lot more time dedicated to developing good NLU models and speech grammars. Clients will try stuff; your demo IVR must be able to handle it.</p>
<p>&nbsp;</p>
<h3><strong>The free roaming demo</strong></h3>
<p>That’s the fearless demo that you’re ready to release and let people use without supervision, and without you breathing down their necks. Scary!</p>
<p>This type of demo is more like a pilot deployment than a demo per se; it is essentially the last stage of a prototype before it is ready for production. It is extremely useful for collecting data from a variety of users, or for sharing with internal employees at your client’s site to gather feedback for optimization. When it works well, it can also be used by some of your clients to showcase the solution to other prospective clients in their enterprise, or by your partners to show to their own clients.</p>
<p>Such a demo must include a significant number of functionalities, use cases and scenarios, including mixed-initiative strategies, exception paths and error recovery strategies, as well as a very robust NLU module. This is costly. The risk of letting such a demo out into the world is also significant: if it does not work well, it may have a very negative impact on your company’s image and communication strategy, among other things. But if it does work well, its potential is enormous.</p>
<p>&nbsp;</p>
<h3><strong>So, which one is right?</strong></h3>
<p>As you may have guessed by now, it depends. On what you want to demonstrate, to whom, in which context, on how much risk you are willing to take, and most of all, on how much time and money you can afford to spend.</p>
<p>To manage risk, we recommend a few safety measures:</p>
<ul>
<li>Have a canned demo ready in case your interactive demo doesn’t work.</li>
<li>Know your equipment and test it in advance.</li>
<li>Avoid speaker phones: they interfere with barge-in and can degrade speech recognition.</li>
<li>Inform your department when you do a demo to ensure that no maintenance operation will take place during your presentation (yes, it happens!).</li>
<li>The “demo curse” really exists! If a problem occurs, keep calm and carry on, and learn how to improvise.</li>
</ul>
<p>In summary, whichever type of demo you decide to put forward, managing expectations and clearly communicating the demo’s purpose and limitations are essential to its success. You don’t get a second chance to make a good impression!</p>
<p><em>Many thanks to my colleague Jean-Philippe Gariépy for the initial idea, his review and great suggestions!</em></p><p>The post <a href="https://www.nuecho.com/how-far-can-you-go/">How Far Can You Go? Managing Expectations in Conversational IVR Demos</a> first appeared on <a href="https://www.nuecho.com">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
