<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Content - AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</title>
	<atom:link href="https://www.nuecho.com/fr/category/content-fr/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.nuecho.com/fr/category/content-fr/</link>
	<description>Nu Echo</description>
	<lastBuildDate>Wed, 22 Sep 2021 17:12:02 +0000</lastBuildDate>
	<language>fr-FR</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.nuecho.com/wp-content/uploads/2019/11/cropped-favicon-32x32.png</url>
	<title>Content - AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</title>
	<link>https://www.nuecho.com/fr/category/content-fr/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Retour sur le Nuance Partner eXperience Summit: Une transformation accélérée dans un marché fluide (Article en Anglais)</title>
		<link>https://www.nuecho.com/fr/retour-sur-le-nuance-partner-experience-summit-une-transformation-acceleree-dans-un-marche-fluide/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=retour-sur-le-nuance-partner-experience-summit-une-transformation-acceleree-dans-un-marche-fluide</link>
		
		<dc:creator><![CDATA[Yves Normandin]]></dc:creator>
		<pubDate>Tue, 03 Mar 2020 15:18:04 +0000</pubDate>
				<category><![CDATA[Blogue]]></category>
		<category><![CDATA[Content]]></category>
		<category><![CDATA[RVI]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/retour-sur-le-nuance-partner-experience-summit-une-transformation-acceleree-dans-un-marche-fluide/</guid>

					<description><![CDATA[<p>Since Mark Benjamin joined Nuance as its new CEO almost two years ago, the company has been going through a breathtaking transformation. After selling its imaging division to Kofax and spinning off its automotive division, the company now focuses primarily on its core business of providing conversational AI products and solutions. Even in its core [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/fr/retour-sur-le-nuance-partner-experience-summit-une-transformation-acceleree-dans-un-marche-fluide/">Retour sur le Nuance Partner eXperience Summit: Une transformation accélérée dans un marché fluide (Article en Anglais)</a> first appeared on <a href="https://www.nuecho.com/fr/">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400;">Since Mark Benjamin joined Nuance as its new CEO almost two years ago, the company has been going through a breathtaking transformation. After selling its imaging division to Kofax and spinning off its automotive division, the company now focuses primarily on its core business of providing conversational AI products and solutions.</span></p>
<p><span style="font-weight: 400;">Even in its core conversational AI business, <a target="_blank" href="https://www.nuance.com" rel="noopener noreferrer">Nuance</a> is fast transforming itself from a software company with a very large focus on professional services to a product and platform company. This was evident at the 2019 Partner eXperience Summit, but it was even more so at this year’s event.</span></p>
<p><span style="font-weight: 400;">This change was of course necessary and, I might add, a bit overdue. Long the dominant vendor of enterprise speech technology solutions, Nuance is now being challenged by companies – <a target="_blank" href="https://about.google" rel="noopener noreferrer">Google</a>, <a target="_blank" href="https://www.amazon.ca" rel="noopener noreferrer">Amazon</a>, <a target="_blank" href="https://www.microsoft.com/en-ca/" rel="noopener">Microsoft</a>, and <a target="_blank" href="https://www.ibm.com/ca-en" rel="noopener noreferrer">IBM</a> among others – that offer easy-to-use conversational AI platforms with state-of-the-art technologies. With these platforms, the claim is that anybody can now develop sophisticated conversational AI solutions; that speech recognition (ASR) and natural language understanding (NLU) work “out-of-the-box” without any need for speech scientists; and that, in fact, you don’t even need developers to build solutions. This is the “do-it-yourself” (DIY) message, and it is a compelling one.</span></p>
<p><span style="font-weight: 400;">Of course, that message is highly misleading. Yes, to some extent, the technology now works “out-of-the-box” in the sense that it is possible to get a simple conversational demo bot up and running quickly. With speech-to-text (STT) engines, there is no need to write speech recognition grammars, and NLU engines can be trained with a few training phrases per intent. But that’s only good for a demo. Building an effective, enterprise-grade conversational AI system is hard work, no matter what the platform is (more on that in a future blog post).</span></p>
<p><span style="font-weight: 400;">What is true, though, is that enterprises really are looking for DIY tools. They are also increasingly demanding cloud-native solutions and, above all, flexibility. Nuance has heard that message loud and clear: it now understands that it’s no longer sufficient to have best-in-class technology and a good professional services organization. Customers want flexible development and deployment models.</span></p>
<p>The most recent big steps that Nuance has taken in that direction are:</p>
<ul>
<li>Conversational AI APIs (launched November 2019);</li>
<li>The Nuance GateKeeper cloud-based security and biometrics suite (launched October 2019);</li>
<li>Nuance Mix: DIY Tooling for partners and end users (general availability planned for end of March)</li>
</ul>
<p><span style="font-weight: 400;">The introduction of Nuance Mix, in particular, is a big change for a company that is used to directly delivering most of its conversational AI solutions through its professional services organization, using closely guarded development tools. But what we’ve seen so far of Mix is promising, with a slick, contemporary user interface. From a company that has years of experience building and deploying compelling conversational AI solutions, this is quite encouraging.</span></p>
<p><span style="font-weight: 400;">Nuance is facing powerful new competitors, but it has many advantages. Its technology is top-notch, it has a very large installed base, it offers the most flexible deployment models (premise or cloud), its technology is integrated with most contact center platforms, and it understands better than anybody what it takes to deliver conversational experiences that work not just in demos, but in the real world. Nuance also offers the most extensive capabilities to adapt and optimize the technology for a specific domain and a specific dialog state, which is often what makes the difference between a good demo and an enterprise-grade solution.</span></p>
<p><span style="font-weight: 400;">Another Nuance differentiator – which they position as a key element of their value proposition – is its strong professional services organization. But that could also turn out to be its Achilles&rsquo; heel, because customers no longer want to be dependent on the vendor’s PS; they want to know that there is a large pool of people who are skilled in the technology and have all the necessary tools. It will be a challenge to turn a company that is culturally used to delivering all the big projects itself into one that enables its partners and customers to do it themselves.</span></p>
<p><span style="font-weight: 400;">In conclusion, Nuance is clearly going in the right direction and making all the right moves, but its plan is ambitious, so execution will be key. Perhaps the biggest challenge will be to implement the culture changes that are required in order to successfully implement this transformation.</span></p>
<p><span style="font-weight: 400;">We’ve been in this market for close to 20 years and these are by far the most interesting times we’ve seen. We’re expecting quite a ride in the next few years.</span></p><p>The post <a href="https://www.nuecho.com/fr/retour-sur-le-nuance-partner-experience-summit-une-transformation-acceleree-dans-un-marche-fluide/">Retour sur le Nuance Partner eXperience Summit: Une transformation accélérée dans un marché fluide (Article en Anglais)</a> first appeared on <a href="https://www.nuecho.com/fr/">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Pour que chaque appel compte (Article en anglais)</title>
		<link>https://www.nuecho.com/fr/making-every-call-count/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=making-every-call-count</link>
		
		<dc:creator><![CDATA[Linda Thibault]]></dc:creator>
		<pubDate>Thu, 31 Oct 2019 18:00:43 +0000</pubDate>
				<category><![CDATA[Blogue]]></category>
		<category><![CDATA[Content]]></category>
		<category><![CDATA[RVI]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=5991</guid>

					<description><![CDATA[<p>The ROI of a Natural Language Call Steering IVR Contact centers are still a central component of medium and large enterprises, and an essential channel for customers to connect with businesses. For a contact center to reach its full potential, it needs to meet the following business objectives: Customer service: provide customer support, deliver the [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/fr/making-every-call-count/">Pour que chaque appel compte (Article en anglais)</a> first appeared on <a href="https://www.nuecho.com/fr/">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>The ROI of a Natural Language Call Steering IVR</h1>
<p>Contact centers are still a central component of medium and large enterprises, and an essential channel for customers to connect with businesses. For a contact center to reach its full potential, it needs to meet the following <a href="https://smallbusiness.chron.com/business-objectives-call-center-66358.html" target="_blank" rel="noopener">business objectives</a>:</p>
<ul>
<li><strong>Customer service</strong>: provide customer support, deliver the best possible experience and generate positive word of mouth</li>
<li><strong>Customer retention</strong>: retain existing customers and boost customer loyalty</li>
<li><strong>Operational efficiency</strong>: optimize average interaction and resolution time</li>
</ul>
<p>Contrary to popular belief, the <a href="https://financesonline.com/call-center-statistics/#channels" target="_blank" rel="noopener">telephone is still a widely used tool</a> to connect with enterprises. While customers appreciate being able to self-serve and perform transactions on their own—whether on their mobile phone or on the web—they still turn to contact centers for assistance when they have more complex issues or are having a hard time with self-service. In both cases, the customer is likely to call your company looking to <a href="https://hbr.org/2017/07/your-customers-still-want-to-talk-to-a-human-being" target="_blank" rel="noopener">speak with a real person</a>. And when they do, you want to be sure you make a strong impression!</p>
<p>Many enterprises today still rely on traditional touch-tone IVRs for call routing and to establish customer intent. But touch-tone menus are often complex, with multiple steps and options, confusing terminology and long wait times. Frustrated with the experience, customers end up hitting zero or selecting an incorrect option (any option!) hoping to reach an agent. This results in a high volume of misrouted calls, with customers having to repeat themselves and agents wasting valuable time talking to customers they can’t help.</p>
<blockquote><p><em>These frustrating experiences, for both the caller </em><em>and the agent, come at a cost: <a href="https://www.newvoicemedia.com/en-us/news/the-62-billion-customer-service-scared-away" target="_blank" rel="noopener">$62 billion in the U.S. alone in 2016.</a></em></p></blockquote>
<p>So how can you improve your customer experience in the IVR while optimizing your agent resources? For starters, by asking customers why they’re calling, instead of having them go through a bunch of menu options trying to get their issue resolved.</p>
<p>Natural Language Call Steering (NLCS) IVRs use natural language understanding and automatic speech recognition to listen to callers, interpret what they’re saying and understand what they need.</p>
<p>By precisely categorizing the caller’s request, the NLCS can route the caller to the right agent on the very first try, most of the time in a single interaction. This serves to reduce the number of transfers and means callers don’t have to repeat themselves. Not only that, but the IVR can transfer the information it collects directly to the agent, giving them a heads-up as to why the customer is calling.</p>
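<p><em>To make that routing step concrete, here is a minimal, self-contained sketch of how an NLCS IVR might map a caller’s stated reason to an agent queue. The intents, queue names and the keyword-based classifier below are illustrative placeholders only; a production system would use a trained NLU model rather than keyword matching:</em></p>

```python
# Toy sketch of the routing step in a Natural Language Call Steering IVR.
# The intents, queues and the keyword "classifier" are made up for this
# example; a real NLCS system classifies with a trained NLU model.

INTENT_KEYWORDS = {
    "billing": {"bill", "invoice", "charge", "payment"},
    "tech_support": {"internet", "modem", "outage", "slow"},
    "cancel": {"cancel", "terminate", "close"},
}

INTENT_TO_QUEUE = {
    "billing": "billing_queue",
    "tech_support": "support_queue",
    "cancel": "retention_queue",
}

def classify(utterance):
    """Return (intent, score) for the best keyword overlap, or (None, 0.0)."""
    words = set(utterance.lower().split())
    best, best_score = None, 0.0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords) / len(keywords)
        if score > best_score:
            best, best_score = intent, score
    return best, best_score

def route_call(utterance, threshold=0.2):
    """Pick a queue and attach the caller's verbatim reason for the agent."""
    intent, score = classify(utterance)
    if intent is None or score < threshold:
        # Low confidence: fall back to a general agent instead of misrouting.
        return {"queue": "general_agent_queue", "reason": utterance}
    return {"queue": INTENT_TO_QUEUE[intent], "reason": utterance}
```

<p><em>With this sketch, a caller saying “there is a charge on my bill I do not understand” lands in the billing queue with their stated reason attached, which is exactly the head start the agent gets in the scenario above.</em></p>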
<p>Making the transition from a touch-tone IVR to an NLCS solution is an investment, but one that will generate great returns. And sticking with the alternative can be extremely costly for companies in the long term, resulting in:</p>
<ul>
<li><strong>High call transfer rates</strong>, a costly issue that wastes valuable agent resources</li>
<li><strong>Customer frustration</strong>, which again wastes agent resources (with customers spending more time expressing their frustration than discussing their needs) and leads to low agent morale/high employee turnover</li>
<li><strong>Poor customer retention</strong>, with existing customers often leaving the company after a single negative experience</li>
<li><strong>Loss of potential customers</strong> if the IVR experience is negative and you fail to make a good first impression</li>
</ul>
<p>NLCS solutions result in fewer misrouted calls, shorter routing times, fewer abandoned calls and better agent utilization—so you can improve your customer experience and <a href="https://www.carahsoft.com/application/files/7315/3235/1731/NUAN-CS-1104-01-DS_Call_Steering_r1.pdf" target="_blank" rel="noopener">reduce your operational costs</a>. And the benefits don’t stop there. NLCS IVRs give you the opportunity to:</p>
<ul>
<li><strong>Identify new self-service opportunities</strong>: by understanding why people are calling, you can spot opportunities to address their issue right in the IVR, greatly reducing the volume of calls into the contact center</li>
<li><strong>Offer simplified access to your services,</strong> by consolidating all your services in a single phone number</li>
<li><strong>Gather key data with analytics,</strong> giving you better insights into your customers, products and services</li>
</ul>
<p>NLCS IVR solutions are a game changer, leveraging the latest conversational technologies to deliver an intuitive, human-like caller experience. Bring your voice channel into the future to improve your customer experience, boost customer loyalty, optimize your agent resources—and make every call count.</p>
<p><em>Many thanks to my colleague Annie Brasseur for her invaluable contribution to this post, and to Yves Normandin for his insightful comments.</em></p>
<p><em>Download our <a href="https://www.nuecho.com/wp-content/uploads/2019/10/one-pager.pdf" target="_blank" rel="attachment noopener wp-att-4406">one pager</a>!</em></p><p>The post <a href="https://www.nuecho.com/fr/making-every-call-count/">Pour que chaque appel compte (Article en anglais)</a> first appeared on <a href="https://www.nuecho.com/fr/">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Rasa Summit, Chatbot Conference, etc. : ce qu’il faut retenir (Article en anglais)</title>
		<link>https://www.nuecho.com/fr/rasa-summit-chatbot-conference-chatbots-voicebots-voice-assistants/#utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=rasa-summit-chatbot-conference-chatbots-voicebots-voice-assistants</link>
		
		<dc:creator><![CDATA[Karine Dery]]></dc:creator>
		<pubDate>Tue, 29 Oct 2019 18:00:05 +0000</pubDate>
				<category><![CDATA[Blogue]]></category>
		<category><![CDATA[Content]]></category>
		<category><![CDATA[Event]]></category>
		<guid isPermaLink="false">https://zux.zsm.mybluehost.me/majoctobre2019/?p=5987</guid>

					<description><![CDATA[<p>A couple of weeks ago, I had the opportunity to attend Bot Week in San Francisco. In addition to the main events &#8211; Rasa Summit and Chatbot Conference &#8211; I attended every event of the week to make the most of my stay in this innovative city, and oh! it was worth it. Not only [&#8230;]</p>
<p>The post <a href="https://www.nuecho.com/fr/rasa-summit-chatbot-conference-chatbots-voicebots-voice-assistants/">Rasa Summit, Chatbot Conference, etc. : ce qu’il faut retenir (Article en anglais)</a> first appeared on <a href="https://www.nuecho.com/fr/">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>A couple of weeks ago, I had the opportunity to attend <a target="_blank" href="https://www.chatbotconference.com/bot-week" rel="noopener">Bot Week</a> in San Francisco. In addition to the main events &#8211; <a target="_blank" href="https://rasa.com/summit/" rel="noopener">Rasa Summit</a> and <a target="_blank" href="https://www.chatbotconference.com/" rel="noopener">Chatbot Conference</a> &#8211; I attended every event of the week to make the most of my stay in this innovative city, and oh! it was worth it. Not only did I learn a lot, but I also met many interesting and interested people, key players in the bot (voice and chat) ecosystem, and heard about exciting use cases and technologies. This full immersion gave me a renewed perspective on what has been done in this area and what is left to explore, and I will try, through this blog post, to give you a glimpse of what I learned.</p>
<h1>The Vision</h1>
<p>Like an evening star, leading innovators on the path to the new era of chatbots and voicebots, is a Vision, a Vision slowly leaving sci-fi movies to enter our reality: the omnipotent personal virtual assistant (OPVA). Imagine having your own OPVA. Or let’s call it Jarvis, like Iron Man’s. Imagine having your own Jarvis (iron suit sold separately). Jarvis is with you everywhere; Jarvis is your own personal vocal Google search; Jarvis starts your coffee pot 10 minutes before you wake up; Jarvis even reschedules your dentist appointment behind your back, because you unknowingly booked your camping trip the same week. This is the <em>Vision</em>.</p>
<p>Multiple speakers talked about the Vision, and/or the path to it. This path is generally represented as 5 levels of AI assistants. For more information, you can read Rasa’s CEO Alex Weidauer’s take on <a target="_blank" href="https://blog.rasa.com/conversational-ai-your-guide-to-five-levels-of-ai-assistants-in-enterprise/" rel="noopener">the 5 levels from an enterprise point of view</a>, or for a summary, these equivalences with some of Jarvis’s skills:</p>
<ol>
<li>Notification Assistant: Does not support user input, only sends messages<br /> <strong>Jarvis</strong>: The external temperature is 1,000 °C; this might become dangerous for your suit.</li>
<li>FAQ Assistant: One-step interactions, answers generic questions:<br /> <strong>Tony</strong>: What’s iron’s melting point?<br /> <strong>J</strong>: 1,538 °C</li>
<li>Contextual Assistant: Answers contextual questions if context is explicitly given:<br /> <strong>T</strong>: Can you send a message to Pepper?<br /> <strong>J</strong>: Sure. What is the message?<br /> <strong>T</strong>: “I will be late for dinner due to some complications, love you.”<br /> <strong>J</strong>: Got it.</li>
<li>Personalized Assistant: Knows the user, their preferences, has, or appears to have, some form of understanding of the user’s world:<br /> <strong>T</strong>: Can you notify my wife I might be late due to some complications?<br /> <strong>J</strong>: Sure, I will let Pepper know you will not be with her for dinner as expected.</li>
<li>Autonomously Organized Assistants: Services are interconnected and the user does not need to intervene:<br /> <strong>J</strong>: Your blood pressure is dropping. May I suggest you head to the nearest hospital?<br /> <strong>T</strong>: I’m okay, I just need to&#8230;<br /> <strong>T</strong>: <em>Faints</em><br /> <strong>J</strong>: Sir? I didn’t understand. <em>(pause)</em> Your vital signs indicate you might have lost consciousness; I will bring you to the hospital if you do not explicitly cancel.<br /> <strong>J</strong>: <em>Starts auto-pilot to the nearest hospital, notifies Pepper and also notifies the hospital of the incoming patient</em>.</li>
</ol>
<h1>Current Jarvis or Where Are Bots Now?</h1>
<p><img decoding="async" class="wp-image-4339 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/10/bot.jpg" alt="" width="797" height="558" /></p>
<p>I remember, a couple years ago, all these “Build a Bot in 10 Minutes” blogs and tutorials, and how every dialogue engine was sold as the easiest and fastest way to create a chatbot. Many were trying to sell their own cheap version of this fashionable new toy.</p>
<p>I was more than happy to find that no one sells this idea anymore. The ideal chatbot has shifted from quick-to-build to personalized, efficient and conversational, as attested by the <a target="_blank" href="https://www.businesswire.com/news/home/20190528005646/en/Bank-America%E2%80%99s-Erica%C2%AE-Completes-50-Million-Client" rel="noopener">hype around Erica</a> (which, as a Canadian, I had not really heard about before the conference). Bank of America’s (large) team spent months working on it, and is still tuning it and enriching its vocabulary and skills. Pretty far from one person building a chatbot while making a deposit&#8230; Not only is it accepted that a bot needs a significant amount of thought and work beforehand, but also that it needs attention afterwards, using analytics and new user data for continuous improvement. Thus, the market has evolved, and lots of new companies have emerged in the last couple of years, offering tools and expertise to facilitate this continuous work.</p>
<p>Here are those who stood out the most by their strong presence during the week:</p>
<ol>
<li>Design tools: <a target="_blank" href="https://botmock.com/" rel="noopener">BotMock</a> and <a target="_blank" href="https://botsociety.io/" rel="noopener">BotSociety</a></li>
<li>Area-specific building tools: <a target="_blank" href="https://smartloop.ai/" rel="noopener">Smartloop</a> for leads and sales</li>
</ol>
<p>N.B.: For a more exhaustive list, refer to the agendas of the events.</p>
<p>Special mention &#8211; <a target="_blank" href="https://robocopy.io/" rel="noopener">Robocopy</a>: The emergence of conversational bots in the last few years gave birth to the <em>Conversation Designer</em> job title. Many of those who wear this title are former UI designers, copywriters or linguists, and until now, the related knowledge was sparse in bot design tools guidelines or blog posts. I think Robocopy’s <a target="_blank" href="https://conversationalacademy.com/" rel="noopener">Conversational Academy</a> arrival marks a milestone in this field; it is becoming an area of expertise in itself, more and more defined every day. I can’t judge the quality of their courses based only on the fascinating talk of their co-founder, Hans Van Damm, but putting this knowledge together can only be a push in the right direction.</p>
<h3>On the Conversational Aspect</h3>
<p>But to create a bot, technology needs to support the design. According to Alex Weidauer, technology has made it possible to create efficient question-answering bots (level 2) for a few years (still not a ten-minute job, though: training the natural language understanding (NLU) model and handling exceptions seamlessly demands work), and now allows level 3 bots, i.e. contextual assistants/bots. The next step would be achieving level 4 (another special mention to <a target="_blank" href="https://aigo.ai/" rel="noopener">Aigo</a>, who seem to have accomplished it for the daily tasks of a home assistant).</p>
<h1>Upcoming Jarvis or What’s Coming Up Next?</h1>
<h3>RCS</h3>
<p>The first talk at Chatbot Conference was Sean Badge from Google on Rich Communication Services (RCS), an overdue rich-content protocol that is slowly replacing SMS. It is a step towards integrated enterprise assistants, allowing them to connect with the user on one network, without forcing them to install separate apps.</p>
<h3>5G and Edge Computing</h3>
<p><em>At Mobile Monday’s Future of Voice and Smart Speakers</em>, discussions revolved around how cloud computing is slowing down assistants and preventing voicebot conversations from feeling natural because of network latency. Imagine talking to one of your friends on the phone, and each time you stop talking, there’s a one-second silence before they answer normally. You would wonder if your friend was one of the first victims of a robot takeover. In the same way, when virtual assistants do this, it only reminds us that it is not a human on the line.</p>
<p>Edge computing, i.e. distributed computing near where it is needed, is probably the solution to this annoying latency, and 5G, which is faster and can connect more devices together, brings it closer than it ever was. Voicebots could eventually be more like that friend who starts talking before the end of your sentence because they can predict the last words. The polite version.</p>
<h3>The Rasa Experience</h3>
<p><img decoding="async" class="wp-image-4341  alignright" src="https://www.nuecho.com/wp-content/uploads/2019/10/rasa.jpg" alt="" width="143" height="171" />As we are trying to make AI assistants more conversational and conversations more human-like, Rasa, as a dialogue engine, stands out as a promising technology for two reasons:</p>
<ol>
<li>The use of machine learning (ML) on the conversational level (and not only NLU)</li>
<li>Their open-source codebase</li>
</ol>
<p>We have been happily using Rasa for several months now, so the first advantage was already obvious to me: ML probably holds the key to machines acting like humans in a variety of contexts, since hard-coding every single reaction would be a colossal task, if not impossible. Consequently, Rasa being ML’s advocate in conversation management, it has an edge its competitors do not. But it is only by attending the Rasa Summit that I could appreciate the advantages of the second point. A self-evident one is that open source means easy customization. It also means on-premise deployment, which is a plus for organizations managing sensitive user data like banks, insurance or health care providers, three of the biggest owners of customer service chatbots (at least in the USA). And last but not least, a refreshing community feel emanates from Rasa events, because they put a significant emphasis on community and value their contributors. This lets them retain people and enterprises, encourage enthusiastic contributions, bring in new ideas and technology, and align their product vision and roadmap with community requirements.</p>
<h3>About Voice</h3>
<p><img decoding="async" class="wp-image-4343 size-full aligncenter" src="https://www.nuecho.com/wp-content/uploads/2019/10/wave.jpg" alt="" width="789" height="191" /></p>
<p>Working for a company that has been bringing “conversational” and IVR together for years, I could not ignore how voice channels were discussed at these conferences. They did have a significant, but not central, place in Bot Week, and it makes sense: how odd and inefficient would Jarvis be if only available by chat? The more conversational bots become, the less we can ignore that language starts with voice, and that, for this same reason, voice assistant usage is rising.</p>
<p>It is generally accepted that designing a voicebot is different from designing a chatbot because of the limited content that can be sent back. However, I noticed that bot developers, me included, tend to forget something important, a fact expressed simply by Emily Lonetto from VoiceFlow at <em>Slack’s Building the Bots of the Future</em> event: voice might be the easiest, fastest and most portable channel to ask for things, but often not the right one to receive them. Indeed, for a single piece of information, you would expect Jarvis to answer verbally, but for a full report, you would expect a whole interactive 3D hologram (equivalent to an email or pile of paper from a real human assistant).</p>
<p>I think that this idea of a distinct output channel tends to be left behind for two reasons:</p>
<ol>
<li>In some voice channels or for some users who do not have the appropriate device (an Echo Show with Alexa for example), a visual output might be impossible.</li>
<li>The idea of designing one bot, with the same flow and the same NLU model for all channels, with only the need to adapt the response, is tempting. While most bot-building platforms are designed with this workflow in mind, this oversimplification limits the possibility of sending output on a second channel.</li>
</ol>
<p>Another cause of this simplification is probably that voice assistants’ speech-to-text (STT) algorithms are unaware of the NLU model. Surprisingly, no one mentioned the problem with this approach, which seems unavoidable to me. I will illustrate it with a true example that happened to me a few months ago while testing a bot over voice with such a system.</p>
<p>Context: I was testing a banking app, and was asked if I wanted to make a recurring or a one-time payment and answered “one time”. I could see the intermediary STT results of my audio stream, and here’s what I got:</p>
<ul>
<li style="text-align: left;"><em>One   </em>                    (I am not finished talking yet)</li>
<li style="text-align: left;"><em>One time </em>             (Cool it works)</li>
<li style="text-align: left;"><em>One time </em>             (It is waiting for me to say something else I guess&#8230;)</li>
<li style="text-align: left;"><em>Fun time</em>              (Final result. Wait what?)</li>
</ul>
<p>Obviously, my dialogue flow fell into an error state. The correct hypothesis was not chosen (and was even replaced!) because the speech recognition model was unaware of the kind of answer it should have been expecting. STT technology sure is getting better and better at eliminating noise, understanding accents and using the user’s history, location or other contextual information, but user-specific information is not always available, e.g. in a phone call. Moreover, in this situation, the sound quality can be far behind what a voice assistant can get because of many factors (low bandwidth, low resolution, microphone, etc.), which multiplies the risk of an incorrect transcription.</p>
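<p><em>Conceptually, one fix is to rescore the recognizer’s N-best hypotheses against the answers the current dialog state can accept, which is the spirit behind phrase hints and auto speech adaptation. The following toy sketch is purely illustrative (the hypotheses, scores and bias weight are all made up; no particular STT engine works exactly this way), but it shows how dialog context can rescue the right transcript:</em></p>

```python
# Toy illustration of biasing speech recognition with dialog context.
# The N-best list, acoustic scores and bias weight are invented for the
# example; real engines expose this idea as phrase hints / adaptation.
import difflib

def rescore(nbest, expected_phrases, bias=0.3):
    """Boost STT hypotheses that resemble an answer the dialog expects.

    nbest: list of (transcript, acoustic_score) pairs from the recognizer.
    expected_phrases: answers the current dialog state can accept.
    """
    rescored = []
    for text, score in nbest:
        # Similarity to the closest expected phrase, in [0, 1].
        similarity = max(
            difflib.SequenceMatcher(None, text.lower(), p.lower()).ratio()
            for p in expected_phrases
        )
        rescored.append((text, score + bias * similarity))
    return max(rescored, key=lambda pair: pair[1])[0]

# The recognizer alone ranks "fun time" first; knowing the dialog expects
# "one time" or "recurring" tips the balance back to the right answer.
nbest = [("fun time", 0.80), ("one time", 0.78)]
best = rescore(nbest, ["one time", "recurring"])  # → "one time"
```

<p><em>In the “one time” vs. “fun time” incident above, a dialog-aware rescoring step like this would have kept the acoustically plausible but contextually absurd hypothesis from winning.</em></p>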
<p>Maybe in an innovative town like San Francisco, people do not talk about an “aging” medium like telephony, but we work with IVR systems every day, and know that large call centers are still a reality for many organizations, and will continue to be for years to come. With cell phones being so omnipresent, the phone remains the easiest means of communication for urgent situations such as calling the insurance company after a car crash.</p>
<p>It turns out that in this IVR universe, for the aforementioned reasons, technologies like VoiceXML closed, and still close, the gap between speech recognition and NLU. They should not be overlooked, as they can be used to bring the newer chatbot technology to legacy call center installations (as we did with Rasa and <a target="_blank" href="https://www.nuecho.com/news-events/developing-conversational-ivr-using-rasa-part-2-the-rivr-bridge/" rel="noopener">the Rivr Bridge</a>). Then one day, with technological advancements like Dialogflow’s <a target="_blank" href="https://cloud.google.com/dialogflow/docs/speech-adaptation" rel="noopener">Auto speech adaptation</a>, speech recognition, visual recognition, language understanding and conversation management will all work hand in hand in constant communication in Jarvis’s circuits, as it happens in our own brains.</p><p>The post <a href="https://www.nuecho.com/fr/rasa-summit-chatbot-conference-chatbots-voicebots-voice-assistants/">Rasa Summit, Chatbot Conference, etc. : ce qu’il faut retenir (Article en anglais)</a> first appeared on <a href="https://www.nuecho.com/fr/">AI Virtual Voice Experts with Google Dialogflow CX - CCAI - Nu Echo</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
