AI Should Help You “Do” Things

June 21, 2023

Generative AI is remarkably good at creating content. Its ability to generate text, images, and video and to carry on conversational interactions is impressive, yet that represents only one step in a customer’s or employee’s quest to complete a process. Imagine a customer reaching out to an organization for assistance. Generative AI can answer the question if its model supports the request, or it can suggest another method. However, its true potential is realized only when it transcends conversational limitations and takes definitive action to help you actually “do” something.

We must view AI not just as an intelligent service that generates content, but as an operative tool that propels actions and leads to substantial business outcomes. AI should be more than an impressive chatbot; it must guide your customers and employees toward the next best action for the outcome they are seeking. This is a fundamental concept in AI: operationalization and optimization. AI shouldn’t merely live in the realm of conversations; it should connect to real-time systems, actively engage with multiple people, and, most importantly, facilitate actions that drive positive business outcomes.

This journey of operationalizing AI starts with a simple yet important concept. Eventually, you will want customers and employees to have one-on-one, one-to-many, or many-to-many conversations with your real-time systems. Doing so gives them the ability to complete an outcome on their own. In essence, what we are striving for is an AI automation system that does more than just chatter; it must “do” things. Chris and I delve into this aspect of AI in this second of three episodes, breaking down its complexities and exploring its capacity to “do” things, which ultimately leads to measurable outcomes.

Speakers

Scott King

Chief Marketer @ Krista

Chris Kraus

VP Product @ Krista

Transcription

Scott King:

Hello, I’m Scott King, and with me is Chris Kraus. Kraus, how are you?

Chris Kraus:

I’m doing well. 

Scott King:

Excellent. Today is our second episode in a three-part series on integrating generative AI, like ChatGPT, Bard, or WatsonX, into your business. In the previous episode, Chris, we discussed how to integrate these systems; today we’re focusing on the actionable outcomes of that integration. These systems should assist employees, automate parts of their jobs, speed up processes, and potentially act as virtual assistants. That’s a popular use case. Help us understand when you would reach this point in a project. We’re not discussing an extensive timeframe, are we?

Chris Kraus:

No, we’re not. If you’re considering adding generative AI to provide a more human-like, natural response, there’s usually a consequential action. It doesn’t mean throwing your hands up in the air and saying we’ve provided the information, and now the customer has to move to a whole new application for RMA (return merchandise authorization) returns and shipping labels. You want to link these two aspects.

For instance, let’s say the AI informs a customer their item is eligible for a return. Immediately, in that same conversation, you would want the AI to ask the customer if they’d like to initiate the return process. To do this, the AI would need additional information such as their name, serial number, and proof of purchase. Once these details are provided, the AI can validate the address, generate an RMA number, create the shipping label, and complete the process. 

The aim is to create a seamless, end-to-end experience. Instead of just providing information, the AI takes the next step and automates the five crucial steps to close the return process. In doing so, it offers a more contextual and satisfying user experience. We want the AI to understand the context and provide the appropriate action. 
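
To make this concrete, here is a minimal Python sketch of the kind of end-to-end return flow Chris describes. It is purely illustrative: the eligibility check, address validation, and label creation are stubbed placeholders for real backend systems, not any specific product’s API.

```python
# Illustrative sketch only: the "backend" calls are stubbed placeholders for
# real systems (order history, address validation, shipping carrier).
from dataclasses import dataclass
import uuid

@dataclass
class ReturnRequest:
    customer_name: str
    serial_number: str
    proof_of_purchase: str
    shipping_address: str

def is_eligible(serial_number: str) -> bool:
    return serial_number.startswith("SN-")           # stub: look up the order system

def validate_address(address: str) -> str:
    return address.strip().title()                   # stub: call an address service

def create_shipping_label(rma: str, address: str) -> str:
    return f"https://labels.example.com/{rma}"       # stub: call the carrier's API

def handle_return(req: ReturnRequest) -> dict:
    """Instead of stopping at 'your item is eligible', finish the job."""
    if not is_eligible(req.serial_number):
        return {"status": "not_eligible"}
    address = validate_address(req.shipping_address)
    rma = f"RMA-{uuid.uuid4().hex[:8].upper()}"      # generate the RMA number
    label = create_shipping_label(rma, address)
    return {"status": "return_started", "rma": rma, "label": label}

print(handle_return(ReturnRequest("Pat", "SN-1234", "receipt.pdf", "1 main st, dallas tx")))
```

The point of the sketch is the shape of the flow: one conversation collects the details, then the same conversation completes every remaining step rather than handing the customer off to a separate application.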

Scott King:

Exactly.

Chris Kraus:

In a call center, this might be referred to as the next best action. You want to apply this principle to your generative AI and chatbots.

Scott King:

When speaking with customers, prospects, and partners, it seems clear to us that the chat experience should provide this level of guidance. But where do you think people get stuck? They already employ personnel who essentially do exactly what you describe in call or contact centers.

Chris Kraus:

People have struggled with call deflection for years, trying to transfer a customer from an agent to an automated telephone system, or to a chatbot that simply looks up FAQs. However, what these chatbots traditionally couldn’t do was look up data in real systems and guide a customer through a multi-step workflow, while retaining context if a transfer to an agent was needed midway. 

Most chatbots don’t retain the history of the conversation, the actions taken, or the workflow completed. That’s where the challenge lies for people – understanding that this technology now exists. It allows for complete lifecycle management for the customer, allowing them to interact with the system at their own convenience, not restricted by traditional business hours or time zones.
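
One way to picture the context retention Chris mentions is a single state object that carries the transcript, the data pulled from real systems, and the position in the workflow, so a hand-off to a human agent loses nothing. This is a hedged illustration; the field names are assumptions, not a particular product’s schema.

```python
# Illustrative sketch: carry history, collected data, and workflow position in
# one state object so a mid-flow transfer to a human agent keeps the context.
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    customer_id: str
    history: list[str] = field(default_factory=list)   # every utterance so far
    collected: dict = field(default_factory=dict)       # data looked up in real systems
    workflow: str | None = None                          # e.g. "product_return"
    step: int = 0                                        # how far along the workflow is

    def record(self, speaker: str, text: str) -> None:
        self.history.append(f"{speaker}: {text}")

    def transfer_to_agent(self) -> dict:
        """Everything a human agent needs to pick up mid-workflow."""
        return {
            "customer": self.customer_id,
            "transcript": self.history,
            "data_so_far": self.collected,
            "resume_at": (self.workflow, self.step),
        }

state = ConversationState("cust-42", workflow="product_return")
state.record("customer", "Is my item still eligible for a return?")
state.collected["serial_number"] = "SN-1234"
state.step = 2
print(state.transfer_to_agent())
```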

Scott King:

Indeed. Let’s extend that example to product recalls. Recalls can happen years after the proof of purchase. For instance, I recently looked up a faulty gauge in my old truck, which is 15 years old. These chat systems and generative AI should support long-running conversations, potentially spanning years. Let’s discuss how integrated systems that understand the context and workflow could facilitate this. How about using employee onboarding as an example? It’s a popular query among our audience.

Chris Kraus:

The first example was straightforward – accomplish the task and you’re done. For long-running tasks, when the user comes back, we should be able to update them with the status of their request automatically. For instance, we could confirm whether we’ve shipped their parts, if the parts have been delivered, or if they managed to install the parts. This kind of continuous status update is important in long-running processes.

A more complex example is employee onboarding, which involves not just time but multiple parts of an organization and different roles. Once an employee is hired, HR starts the onboarding process. This could include signing up for security training, getting hardware, receiving a global ID and email, and being added to the HR system and organization chart. This process may involve multiple steps across different departments. Some steps are automated, while others need human intervention.

An AI can aid in managing and organizing this process, providing intelligent responses and helping answer questions along the way. There will be checkpoints, orchestration of tasks, and AI’s involvement in enhancing this process.

Next year, employees may have to repeat some of these steps for compliance purposes, like security training. A system should automate this process and provide necessary updates without employees having to log into a system of record.
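
As a rough illustration of the long-running orchestration Chris describes, the sketch below tracks onboarding as a checklist of automated and human steps so the AI can report status whenever the employee asks. The step names and owners are made up for the example.

```python
# Illustrative sketch: a long-running onboarding process tracked as a checklist
# of automated and human steps, so the AI can give a status update at any point.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    DONE = "done"

@dataclass
class Step:
    name: str
    owner: str                       # "system" for automated, otherwise a department
    status: Status = Status.PENDING

@dataclass
class Onboarding:
    employee: str
    steps: list[Step] = field(default_factory=lambda: [
        Step("Create global ID and email", "system"),
        Step("Add to HR system and org chart", "HR"),
        Step("Ship laptop and hardware", "IT"),
        Step("Complete security training", "employee"),
    ])

    def complete(self, name: str) -> None:
        next(s for s in self.steps if s.name == name).status = Status.DONE

    def status_update(self) -> str:
        """What the AI would report when the employee asks where things stand."""
        done = [s.name for s in self.steps if s.status is Status.DONE]
        pending = [f"{s.name} (waiting on {s.owner})" for s in self.steps if s.status is Status.PENDING]
        return f"Done: {done}\nStill open: {pending}"

flow = Onboarding("Pat")
flow.complete("Create global ID and email")
print(flow.status_update())
```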

Scott King:

Absolutely. I remember we did a webinar a while back using the onboarding example with one of the project management tools – Trello, monday.com, or Asana. They provided a complex, hundred-step template for employee onboarding. It seemed ripe for automation.

Chris Kraus:

I recently worked with one of our consulting partners, which specializes in business process innovation. They had a process for a financial institution that was 500 steps long. When they printed it, it was six feet long on the wall. It’s amazing to see how complex some of these processes can be and how much potential there is for automation.

Scott King:

Given that IT organizations are already overwhelmed with massive backlogs, integrating generative AI into systems could potentially overload them. How can they handle this increased demand?

Chris Kraus:

You’ve just answered your initial question about why people don’t use AI to make chatbots more intelligent and complete processes. The fear of burdening the IT team with yet another complex project often stops people from adopting AI solutions. Creating a chatbot involves creating a project plan, involving architects, making the chatbot multilingual, and defining all the requirements. That process can take around six months.

However, there is software available to simplify this process, thanks to a strong architecture behind AI solutions, such as an AI iPaaS. This kind of platform understands how to connect to various AI services, and can reliably integrate backend systems via APIs to get and send data from and to different parts of an organization. 

There’s undoubtedly some development work involved here, especially regarding IT governance and security. It’s important to ensure that these APIs are secure and role-based. However, for the middle layer – the part where we’re interacting with the user or writing the script for the chatbot – we want business people to handle this. They’re more familiar with business processes and can adapt them according to the organization’s needs or regulatory compliance reasons.

These business processes change more often than APIs, due to external pressures such as customer demands, changes in business models, competition, or government regulations. By allowing the business to control the process, and using existing APIs, you can increase your speed of operation.

The final layer involves communicating with the end user. We want to have one version for all – whether it’s for a mobile device, a browser, or a pop-up chatbot – using natural language processing and understanding to interact conversationally. This solves all the UI problems and gives IT control over the platform.

So, instead of spending nine months building a system with a large team, you can use a platform that provides proper separation of concerns, an integration bus to access data, and the ability to model the conversation using natural language processing at every step along the way. You don’t need to build a form; instead, you make it conversational, letting it handle user responses like ‘Monday’ or ‘Tuesday’ automatically.
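
A minimal sketch of that layered separation, assuming three hypothetical layers: IT owns the secured API connectors, the business owns the process definition as data, and a conversational layer supplies whatever the next step needs. None of the names below refer to a real platform.

```python
# Illustrative sketch of the three-layer separation: IT owns the API
# connectors, the business owns the process definition, and a conversational
# layer supplies the answers. All names here are hypothetical.
from typing import Callable

# Layer 1 (IT): secured API connectors -- stubbed here.
def create_ticket(employee: str) -> str:
    return f"TICKET-{hash(employee) % 1000:03d}"

CONNECTORS: dict[str, Callable[..., str]] = {"create_ticket": create_ticket}

# Layer 2 (business): the process is data, so it can change without new code.
PROCESS = [
    {"ask": "Which day works for your equipment pickup, Monday or Tuesday?",
     "field": "pickup_day"},
    {"call": "create_ticket"},
]

# Layer 3 (conversation): one loop drives any process conversationally.
def run(process: list[dict], employee: str, reply: Callable[[str], str]) -> dict:
    answers: dict[str, str] = {"employee": employee}
    for step in process:
        if "ask" in step:
            answers[step["field"]] = reply(step["ask"])       # NLU would interpret this
        elif "call" in step:
            answers["result"] = CONNECTORS[step["call"]](employee)
    return answers

# Simulate the conversational layer with a canned reply.
print(run(PROCESS, "Pat", reply=lambda question: "Tuesday"))
```

Because the process lives in the middle layer as data, business owners can change it without touching the IT-governed connectors or the conversational front end, which is the speed advantage being described.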

Scott King:

Given the past decade’s shadow IT problem and the increasing adoption of SaaS products by businesses, there’s a new challenge. Businesses are aware of the potential of generative AI platforms and want to modify workflows faster than IT can. Yet, even though every business is increasingly becoming a software business, the development can’t keep pace.

Chris Kraus:

Absolutely. The line of business comprehends the urgency and the need for agility. Traditional low-code development may build forms and features quickly, but updating them is challenging. The process needs to be optimized for change, because businesses will need continuous adjustments to stay agile. The focus should not be on the speed of initial construction but on the ease of maintenance and adaptability.

Scott King:

Dealing with existing technical debt while limiting future debt is a significant challenge. Most of an application’s cost comes after launch, due to necessary changes. The core competency for many IT groups is increasingly about how fast they can adapt and change.

Chris Kraus:

Indeed, the traditional concept of moving from building to a “hardening phase,” where changes are locked, can’t apply in modern times. Today’s projects are ongoing, requiring continuous adaptation and change. This calls for a different software development lifecycle mindset to stay reactive to business changes.

Scott King:

Well, thanks, Chris, for discussing these different enterprise requirements for generative AI. Our future episode will be about data governance and security risks, which you’ve touched on earlier. We’ll talk about managing the role-based access of these systems. Thanks again, and until next time.
