• April 25, 2024

By Christopher Vitek, RPA Industry Veteran and Contact Center Specialist

Over the last five years, chatbots have morphed into Intelligent Virtual Assistants (IVAs), and Robotic Process Automation (RPA) tools have evolved from unattended (lights-out) servers to attended (human-triggered) desktop agent assistants. There are differences and synergies between these two technologies that can determine whether they are right for you to use, singly or in tandem, in your contact center.

First, a point about synergies between these two technologies. Some companies are working on productizing the combination of IVAs and RPA. While this sounds attractive, they will be too late to market to make a difference. Recently, I worked on a project in which the robot “listens” to a call using Amazon Connect, speech-to-text and Lex (NLP), then uses Einstein strategies, Lightning Flow, Lightning Message Service and the Lightning interface to display Next-Best-Action recommendations and rapid execution tools to the agent during the customer interaction. We built a prototype in three weeks. Bottom line: solution providers seeking to productize the integration of IVAs and RPA are already too late.
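For readers who want to picture that integration, here is a minimal, generic sketch of the “listen and recommend” loop in Python. The intent-to-action mapping, the webhook URL and the function names are illustrative assumptions, not the project's actual code; in the real build, transcription and intent detection come from Amazon Connect and Lex, and the agent display is handled by the Lightning components listed above.

# Minimal sketch of the "robot listens and recommends" flow described above.
# The intent-to-action mapping, endpoint URL and function names are
# illustrative assumptions, not the actual project code.
import json
import urllib.request

def recommend_action(intent: str) -> str:
    """Map a detected intent to a Next-Best-Action hint (stubbed rules)."""
    playbook = {
        "CancelService": "Offer retention discount",
        "BillingDispute": "Open billing review case",
    }
    return playbook.get(intent, "No recommendation")

def on_utterance(transcript: str, detected_intent: str, agent_desktop_url: str) -> None:
    """Push a recommendation to the agent desktop while the call is in progress."""
    payload = json.dumps({
        "transcript": transcript,
        "intent": detected_intent,
        "next_best_action": recommend_action(detected_intent),
    }).encode("utf-8")
    req = urllib.request.Request(
        agent_desktop_url, data=payload,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    urllib.request.urlopen(req)  # fire-and-forget for the purposes of this sketch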

I will start with some history of both technologies, then provide use-case examples to illustrate the differences, and close with the synergies between them.

IVAs

The operative core technologies that support IVAs are Natural Language Understanding (NLU) and Natural Language Generation (NLG). Together, these components comprise Natural Language Processing (NLP), and this is where AI and machine learning are applied to the dialog the chatbot conducts. These are actually very old technologies, dating back to 1972 when Robert Mercer and Peter Brown filed their first patents. My first NLU implementation was in 1996, when I delivered a voice-bot at Johnson & Johnson Health Care Systems. We used it to match more than 60,000 component numbers to support order assembly before connecting the caller with an agent in the contact center.

In the last five years, the down-draft in pricing for computers and memory has made these technologies economically practical for many more use cases. Further, Amazon, Google, Microsoft and IBM now dominate the NLP industry with low prices and substantial capabilities.

NLU and NLG both use AI and machine learning, but most IVAs focus on NLU. An example would be an IVA that understands a request for an account balance and triggers a read-back of data pulled from a backend system. There is no language generation in this case, just Text-to-Speech for IVRs and plain text for chatbots.

IVA implementations are typically simplistic in how they retrieve information or, more importantly, perform multiple tasks to support a response. An IVA can trigger multiple data-retrieval or update processes, but those methods are usually hard-coded to the specific request (for example, AWS Lambda functions, as sketched below). In RPA, by contrast, AI/ML can be applied within the data-processing task itself, which makes RPA data-processing functions less prone to breakage from system updates and much easier to maintain.
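To make the “hard-coded to the request” point concrete, below is a minimal sketch of an account-balance fulfillment function in the style of an AWS Lambda handler for an Amazon Lex V2 bot. The intent name, slot name and the stubbed balance lookup are assumptions for illustration; the key point is that the retrieval logic is wired to this single request, which is what makes this style brittle when backend systems change.

# Minimal sketch of a fulfillment Lambda hard-coded to one request type
# (an "account balance" intent). Intent name, slot name and the lookup
# below are illustrative assumptions, not a production integration.

FAKE_BALANCES = {"12345": "152.40"}  # stand-in for a real database or API call

def lambda_handler(event, context):
    intent = event["sessionState"]["intent"]
    account = intent["slots"]["AccountNumber"]["value"]["interpretedValue"]
    balance = FAKE_BALANCES.get(account, "unavailable")

    # Lex V2-style "close the intent" response; the IVR reads this back with
    # Text-to-Speech, while a chatbot shows it as plain text.
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent["name"], "state": "Fulfilled"},
        },
        "messages": [{
            "contentType": "PlainText",
            "content": f"The balance on account {account} is {balance}.",
        }],
    }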

RPA

Typically, IVAs support customer-facing interfaces like IVRs and chatbots. RPA solutions, by contrast, are typically designed to support the agent with data-processing tasks on or off the agent’s computer.

Early-stage RPA deployments are specifically targeted at helping contact center agents perform complex data-processing tasks. For example, a software robot can retrieve the last four months of billing activity on a rented copier: it downloads each statement as a .csv, loads it into Excel, removes proprietary headers and columns, attaches the files to an email and sends it to the customer. Done manually, this process took a little over a minute for each monthly report; with RPA it takes less than 10 seconds per report and can be executed in the background.
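A rough Python sketch of that workflow follows. In practice the process would be recorded in an RPA designer rather than hand-coded, and the column names and mail settings here are assumptions; the Excel step is collapsed into direct .csv cleanup for brevity.

# Rough Python sketch of the copier-billing workflow described above.
# Column names and SMTP settings are illustrative assumptions.
import csv
import smtplib
from email.message import EmailMessage
from pathlib import Path

PROPRIETARY_COLUMNS = {"internal_id", "cost_center"}  # assumed column names

def clean_statement(src: Path, dst: Path) -> None:
    """Strip proprietary banner rows and columns from one downloaded statement."""
    with src.open(newline="") as f:
        rows = list(csv.reader(f))
    header = rows[2]  # assume two proprietary banner rows, then the real header
    keep = [i for i, name in enumerate(header) if name not in PROPRIETARY_COLUMNS]
    with dst.open("w", newline="") as f:
        writer = csv.writer(f)
        for row in rows[2:]:
            writer.writerow([row[i] for i in keep])

def email_statements(files: list[Path], customer: str) -> None:
    """Attach the cleaned statements and send them to the customer."""
    msg = EmailMessage()
    msg["Subject"] = "Your last four copier billing statements"
    msg["From"] = "billing@example.com"
    msg["To"] = customer
    msg.set_content("Please find the requested billing statements attached.")
    for f in files:
        msg.add_attachment(f.read_text(), subtype="csv", filename=f.name)
    with smtplib.SMTP("localhost") as smtp:  # assumed internal mail relay
        smtp.send_message(msg)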

While most IVAs use AI and ML to support the enhancement of the dialog between the user and the computer, software robots use AI and ML to enhance the agent’s data-processing efforts. Specifically, the best RPA solutions use an AI-based utility called Computer Vision to interact with the agent’s PC the same way that a human does. This allows the agent to trigger complex data-processing events with a single click or voice trigger. Further, these automations can run in the foreground or background on agents’ PCs.

Computer Vision also allows the RPA developer to use a record function during development: the developer starts a recording session and performs the data-processing task once, and then, with a single click, the robot generates the software needed to execute the process. Because Computer Vision uses AI and ML to manage these interactions, the robots are tolerant of interface changes from system upgrades and maintenance that would otherwise break an automation or a data-processing function like an AWS Lambda. The same AI-based technology can also be used to extract information from invoices, statements, interaction transcripts and/or emails associated with the customer.
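As a rough open-source analogy to this image-based approach, the sketch below uses the pyautogui library to locate on-screen elements by their appearance rather than by fixed coordinates, so modest layout changes do not break the flow. Commercial RPA Computer Vision is considerably more capable; the screenshot file names and the export flow here are illustrative assumptions.

# Rough analogy to Computer Vision-driven UI automation using pyautogui.
# Screenshot file names and the export flow are illustrative assumptions.
import pyautogui  # image-based desktop automation; confidence matching needs OpenCV installed

def click_image(image_file: str) -> None:
    """Find a UI element by its appearance and click it, wherever it sits on screen."""
    try:
        point = pyautogui.locateCenterOnScreen(image_file, confidence=0.8)
    except pyautogui.ImageNotFoundException:  # newer versions raise instead of returning None
        point = None
    if point is None:
        raise RuntimeError(f"{image_file} not found on screen")
    pyautogui.click(point)

def export_statement(account_number: str) -> None:
    click_image("search_box.png")      # assumed screenshot of the billing system's search box
    pyautogui.typewrite(account_number, interval=0.05)
    pyautogui.press("enter")
    click_image("export_button.png")   # assumed screenshot of the Export button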

Synergies = Rapid Human Servicing and Zero-Touch Support Services

Typically, once an RPA solution is built and stable it can be moved into any self-service channel. The only difference is the trigger. In our copier invoice example above, the process is initially triggered by the agent. When integrated into a self-service solution, the process is triggered by the IVA.
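A minimal sketch of that idea: one channel-agnostic automation behind two thin triggers, one for the attended (agent) case and one for the self-service (IVA) case. The function and handler names are assumptions for illustration.

# Minimal sketch of one automation behind two triggers. The copier-invoice
# process itself does not change between channels; only the trigger does.

def send_billing_history(account_number: str, months: int = 4) -> str:
    """The stable, channel-agnostic automation (e.g., the copier-invoice process)."""
    # ... download, clean and email the statements as sketched earlier ...
    return f"Sent the last {months} statements for account {account_number}."

# Trigger 1: attended use -- the agent clicks a button on the desktop.
def on_agent_button_click(selected_account: str) -> None:
    print(send_billing_history(selected_account))

# Trigger 2: self-service use -- the IVA fulfills the same request with no agent.
def on_iva_intent(slots: dict) -> dict:
    confirmation = send_billing_history(slots["AccountNumber"])
    return {"contentType": "PlainText", "content": confirmation}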

Long-term, IVAs and RPA will work together to support zero-touch customer interactions, regardless of the complexity of the request.

For the fastest path to a zero-touch environment, IVAs and RPA should be built concurrently, with each initially focused on the simplest automations while the infrastructure scales. Scale makes it possible to implement enhanced AI/ML solutions that execute in real time during customer interactions, whether through self-service interfaces or human-supported interactions. This may take six to 12 months, but the rewards will be substantial reductions in operating cost, elimination of errors and substantial improvements in first-contact resolution.


If you liked this article, please sign up for RPA Today! Registrants receive our free weekly RPA newsletter covering the most recent developments in the Robotic Process Automation, Intelligent Automation and AI space. In addition to news updates, we also provide feature articles (like this one) with a more in-depth examination of RPA issues for end users and their enterprises.