Competing for the Future

April 12, 2023

Enterprise uses for artificial intelligence continue to expand. Companies are now using AI to automate processes in customer service, human resources, IT, and supply chain, and to make more informed decisions that improve profits and the customer experience.

But is AI necessary?

Companies like Krista, IBM, and your competitors think so.

I recently had a conversation with Manish Sampat about his experience working with IBM Watson. Manish is the Global Sales Leader for IBM Watson, and he shared how AI can be used to enhance processes and customer service, what ethical considerations need to be taken into account when implementing AI, and how we can upskill our people to build better AI-led processes.

Key Points:

  • IBM is committed to helping workers transition into the new normal with skills training and global initiatives for AI development.
  • Humans will remain a critical part of the decision-making process; AI is here to assist us in our tasks, not to replace us. 
  • Executives should act quickly, have a strategy in place, and make sure data is available to adopt AI and remain competitive.
  • The CIO’s office, data science team, chief customer officer, chief marketing officer, and chief digital officer should all work together to oversee AI adoption.
  • A variety of departments within a company should collaborate effectively to ensure the ethical use of AI.

How does Watson’s AI technology work and what makes it unique?

IBM Watson stands out as a versatile AI brand encompassing a range of capabilities, from speech processing and conversational AI to advanced text analytics and intelligent document processing, all underpinned by market-leading natural language processing. Watson’s uniqueness lies in its recognition by Gartner as a leader in conversational AI and insight engines, its ability to be customized and trained for specific domains without compromising data ownership, and its user-friendly low-code or no-code interfaces that foster continuous learning. Furthermore, Watson offers flexibility in deployment, allowing customers to run services on IBM Cloud, in on-premises data centers, or on hyperscalers, catering to various data privacy concerns and preferences. These key differentiators contribute to Watson’s success in the market.
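
For readers who want a concrete feel for the API layer described above, here is a minimal sketch of calling Watson Assistant through IBM’s ibm-watson Python SDK. It is an illustration only, not a recommended architecture; the API key, service URL, assistant ID, and version date are placeholders you would replace with values from your own IBM Cloud instance.

```python
# Minimal sketch: sending a single utterance to Watson Assistant with the
# ibm-watson Python SDK. All credentials and IDs below are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")            # placeholder credential
assistant = AssistantV2(version="2021-06-14",               # example API version date
                        authenticator=authenticator)
assistant.set_service_url(
    "https://api.us-south.assistant.watson.cloud.ibm.com"   # region-specific endpoint
)

# Stateless call: send a customer question and print any text responses.
result = assistant.message_stateless(
    assistant_id="YOUR_ASSISTANT_ID",                       # placeholder assistant ID
    input={"message_type": "text", "text": "Has my deposit cleared?"},
).get_result()

for item in result["output"]["generic"]:
    if item.get("response_type") == "text":
        print(item["text"])
```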

How has the generative AI buzz affected the IBM Watson group?

Generative AI and ChatGPT have truly ignited a renewed excitement in the field of artificial intelligence, much like the buzz created when Watson played Jeopardy back in 2011. This high tide raises all boats, boosting market awareness for AI as a whole. The underpinnings of generative AI, including foundational models and large language models, have been integrated into various products of ours since 2020, with upcoming announcements to be made at the THINK conference in May. As this technology captivates the market, questions surrounding trust, transparency, control, and responsiveness arise. Enterprises are now seeking ways to ensure generative AI models utilize their own data to address customer queries effectively and responsibly. These pressing questions and more will be addressed at the forthcoming conference, highlighting the profound impact generative AI has had on the AI market as a whole.

How have companies utilized Watson to improve business processes?

Companies have been leveraging Watson’s capabilities to improve processes and develop innovative business models across various industries. For instance, Watson has augmented tax professionals by helping them identify potential deductions, while also aiding large accounting firms in M&A analysis, reducing research time by 20-30%. In the healthcare sector, Humana has used Watson to handle calls from providers, validating eligibility and benefits with 95% accuracy, which reduces the cost per call and allows agents to focus on more complex issues. Watson’s intelligent routing capabilities have also been utilized by a large ISV to cut ticket response time by almost 50%. Furthermore, Watson’s most popular use case lies in augmenting customer service through voice and digital agents or omnichannel experiences, enabling enterprises to strike the right balance between automated and hands-on customer service. This flexibility allows organizations to create tailored customer experiences, enhancing efficiency while simultaneously improving overall satisfaction.
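
The intelligent-routing use case mentioned above, and described in more detail in the transcript below, boils down to a simple decision flow: return a known answer as a quick fix when one exists, otherwise classify the ticket and route it to the right engineer. The sketch below is a hypothetical, keyword-based stand-in for that flow, not IBM’s implementation; a real deployment would delegate the lookup and classification steps to NLP services.

```python
# Hypothetical sketch of an intelligent-routing flow. The keyword lookups are
# stand-ins for the NLP-driven search and classification a real system would use.
KNOWN_FIXES = {  # stand-in knowledge base of previously resolved issues
    "password reset": "Use the self-service reset link on the login page.",
}
ENGINEERING_QUEUES = ["billing", "integration", "performance"]  # stand-in queues

def route(ticket_text: str) -> str:
    text = ticket_text.lower()
    # 1. If an existing answer covers the issue, return it as a quick fix.
    for issue, fix in KNOWN_FIXES.items():
        if issue in text:
            return f"auto-reply: {fix}"
    # 2. Otherwise, classify the ticket and route it to the matching queue.
    for queue in ENGINEERING_QUEUES:
        if queue in text:
            return f"routed to {queue} engineering queue"
    return "routed to general triage"

print(route("Customer cannot complete a password reset from the portal"))
print(route("Nightly integration job fails after the latest upgrade"))
```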

Where do ethical ramifications and considerations stand when developing and deploying AI solutions? 

Ethical considerations are critical when implementing AI solutions, as they help ensure that these technologies augment human intelligence rather than replace it. Three core principles should guide AI development and deployment:

  1. AI should augment the human experience, supporting and enhancing human decision-making;
  2. Insights and data generated by the enterprise should belong to the enterprise, ensuring that AI providers don’t use them for their own purposes; and
  3. AI systems must be transparent and explainable, fostering trust in the outcomes and adherence to fairness.

Moreover, AI systems should prioritize robustness, security, and data privacy. To maintain a balance between human and AI expertise, it’s crucial to promote an ethical approach that emphasizes the augmentation of human intelligence with AI systems. For instance, AI can help process overwhelming amounts of unstructured information and present relevant data to human professionals, who can then make informed decisions.

How should companies evaluate AI and who should own the process?

Companies looking to adopt AI should act quickly to identify opportunities where AI can play a role and potentially give them a competitive edge. To get started, executives should adopt a strategic mindset, focusing on a specific domain or project where AI can help. Defining the business problem, scoping it out properly, and setting up clear success criteria are essential for effective AI implementation. Moreover, it’s crucial to have accessible, cleansed data to train the AI system and execute it in an agile manner, using short sprints to iterate and adjust as needed.

Regarding the executive responsible for AI adoption, it is typically managed by the data science team, the CIO’s office, or even chief customer officers, chief marketing officers, and chief digital officers on the business side. The democratization of AI has led to an increase in interest from various business units, which can have both positive and negative impacts. While it allows for broader experimentation and reduces dependency on the CIO’s office, it also emphasizes the importance of managing ethical AI concerns, such as trustworthiness, transparency, explainability, and security protocols. Therefore, having governance around the AI process is paramount as enterprises grow and expand in this space.

All businesses can use AI

Adopting AI has the potential to revolutionize various industries by augmenting human intelligence, improving processes, and enhancing customer experiences. As companies begin their AI journey, they must consider ethical implications, prioritize transparency and trustworthiness, and establish proper governance around AI processes. While interest in AI continues to grow across different business units, it’s crucial for enterprises to act quickly, strategically, and with agility to remain competitive. By carefully identifying suitable use cases, setting clear goals, and collaborating across different executive roles, businesses can harness the power of AI to drive innovation and efficiency in the modern world.

Speakers

Scott King

Chief Marketer @ Krista

Manish Sampat

Worldwide Sales Lead, IBM Watson

Transcription

Scott: Well, hey there everyone. I am Scott King and I’m joined by Manish Sampat. Hey, Manish, how are you?

Manish: I’m good Scott. How are you today?

Scott: I’m doing good, enjoying the sunshine through my window here. And so, Manish is in the Watson Group over at IBM.

And what we’re gonna talk about is a little bit about what Manish does, what he’s responsible for, what Watson is, some questions about AI, really just to understand where Watson has been, what it’s doing today, and where it’s going in the future. 

So, Manish, could you explain a little bit about your role at IBM and what that role entails? And also, what you’re responsible for delivering.

Manish: Thanks, Scott. And first of all, I appreciate the opportunity to be part of this podcast. So thank you guys very much.

My role in IBM is to work closely with our various sales channels: direct sales, business partners, ISV partners, digital sales teams, and global system integrators. The ultimate goal is to help them drive Watson sales. 

So I work with them and also engage heavily with our product management team to make sure that we’re bringing the input from our customers, the queries from our customers, back into product, so that we are aligning with where the market demands are.

Scott: Super. So I guess you get a lot of the cool questions, right? You get all the hard questions.

Manish: I get a lot of the hard or cool questions. There are always unique ones that I’ve never seen before. But it’s an interesting role, and I really do enjoy it. And I love the variety of working with different lenses from the different teams I work with: direct sales, ISVs, business partners, etc.

Scott: That’d be fun. So, just explain to us in layman’s terms, right? Our audience is really across the board. Normally they’ll have a question, and we’ll actually have someone do the research to find the answer. And that’s more of the executive level, but we have practitioner-level people listening to the podcast too. 

So, in layman’s terms, how does Watson’s AI technology work, and what makes it unique compared to other solutions? Maybe if you could include some of the information that you give to other partners, sales teams, and ISVs, like how do you explain it in layman’s terms?

Manish: That’s a good question, Scott. So Watson really is our generic brand for the things that we do in artificial intelligence, right? So that’s what we’ve named it. And underneath that, in this AI field, we do a number of things, from speech processing to conversational AI, to advanced text analytics, to intelligent document processing and understanding. 

And all of these are powered by our market-leading natural language processing capabilities that underpin these applications, if you will. And so when we talk about what we do, it’s a variety of those things.

And when we talk about what really makes us unique relative to others in the market, first, we’ve been recognized by Gartner as a leader in conversational AI with our product Watson Assistant, as well as in insight engines with our product Watson Discovery. And we’re the only enterprise leader in both of those quadrants with Gartner.

In addition, something we think makes us unique is that we allow our customers to customize and train Watson for their specific domain, and we believe that the data you train it with and the insights that are created belong to the enterprise. They don’t belong to IBM, and we don’t use that data to train our models.

In addition, we’ve really built these APIs and applications for business users to use. They are low-code, no-code-type interfaces. They are underpinned by auto-learning, so they learn from the interactions and get better over time. The point is that we’ve set this up so that with little data, you can get these APIs or applications deployed quickly and drive value for your organization.

And then finally, we allow our customers to deploy these services anywhere. You can run them in IBM Cloud, you can run them in your data center on-prem if you have data privacy concerns or considerations, or if you use a hyperscaler, we can deploy our services on the hyperscalers as well.

We think those are some of the key differentiators for why Watson is successful in the market.

Scott: Data privacy, right? There’s always a whole lot of questions around data privacy because maybe people don’t understand where it goes and where it gets crunched and where the output is. But we get a lot of questions about that too because our customer base is global, so there are different rules and regulations. 

So, I’m curious, we’ve answered this question before on our podcast about how generative AI really just elevated everyone’s expectations and maybe changed their perspectives of what this can do. How has the past five months with all the generative AI chatter affected your group, the Watson group, and what kind of questions do you guys get about it?

Manish: No, it’s a great advancement in AI. I believe a high tide raises all boats, and that’s what we’ve really seen with generative AI. What it’s done for us is really rekindle the conversation.

We haven’t seen this type of excitement in AI since Watson played Jeopardy way back in 2011. It sounds like it was a long time ago, but really it wasn’t that long ago. Generative AI is rekindling these conversations with our clients and the minds of our clients in terms of the capabilities. 

We’ve been using the underpinnings of generative AI, foundational models, and large language models in our products since 2020. We have a number of announcements that we’ll make at THINK, our big conference in May, around generative AI. 

But questions are coming out around trust and transparency, such as: when we train it, what data is it being trained on? How do I keep the generative AI from hallucinating and going off-kilter, which we’ve heard some stories about? How do I make sure it’s being responsive? How do I get my data? How do I use my information to respond within my enterprise?

Because the generative AI solutions that have captivated the market are general purpose models. At the end of the day, if you’re an enterprise, you want the generative AI using your data to answer your customers’ queries and questions. Then, how do I take that and utilize it in my enterprise? These are the questions we’re asking. Some things will be answered at our conference in May.

Scott: May? I think I’m already out of town, so I don’t think I can make it. But a couple of our guys are going, so I’ll get feedback from THINK. So, people are asking more questions about AI. I had totally forgotten about Watson and Jeopardy, but now you’ve mentioned it.

Can you share any interesting ways companies have utilized Watson to improve processes or develop innovative business models? Are there any use cases beyond what you mentioned earlier?

Manish: There are a couple that come to mind, at least to me. One of my personal favorites is from years ago when we trained Watson to assist a tax firm with completing people’s taxes and helping the professionals. 

We’ll talk more about the idea of augmenting humans, but Watson was really augmenting the tax professionals and helping them identify potential deductions when itemized deductions were a thing. It was helping those tax professionals to identify deductions, which was one of my personal favorites.

Since then, we’ve seen things like using Watson to help a large accounting firm that does M&A analysis. We’ve actually augmented their solution with Watson, where it analyzes financial reports, analyst reports, and social media, and extracts information and insights for the analysts.

This allows the analysts to focus more on decision making and analysis rather than spending time doing research. We’ve been able to trim research time down by as much as 20 to 30%, making them more efficient.

I want to highlight two other examples, one of them being Humana, which is publicly referenced. They’ve actually used Watson, and this goes back to one of the conversations about tuning Watson. We trained our speech engine on healthcare to the point where we achieved 95% accuracy. 

So Watson is actually handling calls for Humana from providers, validating eligibility and benefits, and they’ve seen the cost per call drop to under a dollar. This has allowed their agents to focus on more challenging issues and questions to drive their NPS or Net Promoter scores.

And finally, another unique example is when we worked with a large ISV. If you had an issue with their software, as an enterprise, you had to fill out a ticket online, which would take two to three weeks to get sorted out, triaged, and responded to. 

We actually put Watson and some of our capabilities to work and built intelligent routing into the ticketing portal. Watson analyzes that information and identifies whether there’s anything we’d call a quick fix. If there’s already a ticket, an answer, or a response, we provide that back to the enterprise to see if it solves the problem. If not, we identify the appropriate engineer internally and route the ticket to them.

This shaved almost 50% off the time it took to get these tickets answered. These are just some different examples of how we’ve used Watson. I like to think of Watson as being very flexible and able to handle any number of different things.

Scott: The last example, the ISV, is the easiest one to think about, because when you contact customer service, you can’t articulate your situation or issue in a web form. It’s too complicated. So when you call, you just want to talk to a person who can understand all of the intent and categorization and figure out who needs to work on your ticket. If I call American Airlines, I’m immediately saying agent, agent, because it’s just too complicated.

But the AI can do that. You just have to figure out, okay, I want to try this situation. I want to try it in customer service. That’s the easiest place to start because it’s a high-cost role with high turnover. Agent turnover is 100%. So I love that use case, and that is definitely the most popular one.

Manish: And the most popular use case we see is around augmenting customer service, whether that’s through a voice agent or a digital agent or other channels. Leveraging Watson to provide that omnichannel experience, regardless of where the customer wants to engage with you, and designing the experience the way the enterprise wants.

I’ll give you an example, Scott. I work with a financial institution. They were going to handle most of the mundane things with a virtual agent. So if I had a question about my account or whether one of my deposits cleared, things of that nature, the virtual agent could handle it.

But if someone came in and said there was fraud on their account, they immediately escalated to a real agent. They didn’t want to have a virtual agent handling that because that could be a very challenging issue, and they want to provide hands-on customer service. 

So, AI can allow enterprises to create a better customer experience by deciding which journeys they want to automate and which journeys they want to accelerate or share with an agent. Then it’s about taking the time to work with the AI system and give it the information it needs to clarify, or disambiguate, as we call it, so it can provide an answer or, if not, at least escalate to a human who has all the information and can resolve the call much, much quicker.

Scott: And in situations like that and customer service, you can trust AI to make a decision, but some situations you can’t. You get into all these ethical ramifications and considerations. How do you guys take that into account when developing and deploying AI solutions? There are some situations where you don’t know if you can trust a computer to make the right decision. How do you handle that?

Manish: That’s a great question, and ethical AI has become a highly popular topic with generative AI in the last five months. IBM has been a leader in ethical AI; we have our own AI ethics board that reviews all of our AI initiatives and efforts within the company.

We have three core principles for AI, and all of this information about our principles, by the way, is publicly available. Number one, the purpose of AI is to augment the human experience or human intelligence. So we don’t look to replace humans but to support them. The example I gave you, the M&A analysis, is one such example.

Secondly, we really believe that the insights and data created by the enterprise belong to the enterprise. So we don’t use it for our own purposes. 

Third, we believe that all of these AI systems have to be transparent and explainable. Generative AI and some of the hype in the last few months have really raised the bar on this one in terms of understanding how those models really work. 

When we talk to our clients about adopting AI and the responsible adoption of AI, we start to focus on things like explainability. The AI systems should be transparent, the users should understand what went into the recommendation, and the outcomes need to be fair. They need to treat individuals and groups of individuals fairly.

The systems also need to be robust in the sense that they have to be secure and minimize security risks; they should not be accessible to third parties or external agents. I mentioned transparency earlier, and I’ll reiterate that, because we want to make sure users are able to see how the service works, understand the functionality, and have trust in the outcomes coming out of the system.

Finally, and most importantly, we have to prioritize and safeguard the customer’s data and the data rights that are in there. We’ve seen a whole plethora of regulations around data privacy, and so we believe that these systems have to have that data privacy in them.

Scott: Even internally, you can’t have AI go haywire and answer questions about internal data I shouldn’t have access to, because I’ll figure it out, and I’m always working outside of the rules. You talked about assistive AI, and you touched on trust in the outputs of the AI. There’s this human and AI interaction. What’s the balance between human expertise and AI expertise? How do you converse with prospects, customers, and partners about managing human and AI expertise so that we all get along? What do you tell people about Watson?

Manish: When we talk about this, we really discuss the ethics I mentioned, which is augmenting human intelligence and the process. The example I gave of tax professionals is where we augment by highlighting potential itemized deductions, but the tax professional makes the ultimate judgment.

In the M&A example, we pull information from social media, analyst reports, and financial statements, presenting it to the M&A analyst to incorporate into their analysis and models.

Our view is that AI’s purpose is to augment, not replace human intelligence. Today, we’re bombarded with unstructured information from emails, social media, text messaging, and the internet, making it overwhelming for humans to process. AI systems can help process information and bring back relevant information so individuals in enterprises can make decisions and augment their job. 

IBM has committed to supporting workers as they transition by investing in global initiatives and skills training for AI development. We focus on helping people upskill and find opportunities within this new normal.

Scott: That’s going to be a big initiative. There must be some nonprofit out there that focuses on upskilling. Because it’s not going to replace people; it’s just going to make people better. At least, that’s my perspective. 

However, one must learn how to use it. I have played with generative AI, and it has gone haywire on me. As you mentioned, people are exploring how to use AI and how to augment people. What advice do you have for business executives who want to adopt AI but haven’t done it yet? They’re searching for their first project, their first use case. What do you tell them about AI?

Manish: It’s interesting. A study by IBM’s Institute for Business Value found that almost 40% of the companies interviewed were piloting AI, and another 40% were considering it. With the interest around generative AI, we’re seeing an uptick in this. So companies or enterprises not looking at it today may be behind their competitors or industry.

I would say number one, act quickly. Move quickly to identify opportunities where you think AI may play a role. To get started, begin with a strategic mindset. Have a strategy and find a project, a domain where you think AI can help. Define the business problem or issue you’re trying to solve and scope it out properly so you understand what you’re trying to accomplish. Set up and define what the business outcome should look like and what success criteria should look like, so you have some measurable or qualitative measures of success. 

Finally, for any AI system, you need to train it on relevant data to the business problem. Make sure you have accessible, cleansed data to train the AI system and try it against. Then execute in an agile manner so that you’re iterating and not spending too much time. These should be sprints, two-week sprints to see where you’re at, and then readjust as needed. I would encourage people to try, take small steps, and understand how to utilize AI.

Scott: It makes sense, except for the elephant. I’ve never eaten one, nor would I ever try. But I get the analogy. With the two-week sprints, strategy, and looking at AI, among the companies and people you talk to, is there a congruent person or role that’s looking at it? Is it someone in the CIO organization, like the AI guy? Is it the chief digital officer or the CISO? Is there any pattern like if you could apply AI to those, all those conversations, who’s the least common denominator?

Manish: More often than not, it’s gonna be in one or two departments. This is where the enterprise takes this on. It’s gonna be either the team that’s doing the data science, with the data scientists who love to play with this and are looking at this stuff, or the CIO’s office that’s exploring ways to bring this in to make the enterprise more efficient.

But candidly, as we spoke about earlier, customer service is a key area where we’ve seen repeated success in using AI. So your chief customer officer, your chief marketing officer, your chief digital officer, and all those folks on the business side that manage that type of interaction or engagement are also prime candidates for this type of technology, in addition to, let’s say, the traditional CIO, CTO’s office or the data science team depending on where they reside.

Scott: I’d love to be a fly on the wall in the CIO’s office when all those people come to the CIO and say, hey, I want to try AI. There was an article in the Wall Street Journal a couple of weeks ago, and we did a podcast about it that I’ll point to, about how CIOs are being inundated with AI requests now. Because now everybody sees it, they change their expectations, like, how do I try it? So I’d love to hear all those requests.

Manish: So that’s the democratization, for lack of a better term, of AI, to a point where business units and non-developers can actually start to play with it, and that’s what we’re starting to see in the industry. And that’s good and bad. It’s great because we’re seeing the interest, we’re seeing people try it. It’s good also because then there’s not such a dependency on the CIO’s office, but it’s bad because the CIO’s office is still the one that has to manage all the things we talked about: ethical AI.

That’s the other piece of the equation. They need to manage trustworthiness, transparency, and explainability. They need to make sure it fits within the security protocols, etc., of their organization. And so managing that and having governance around the AI process is also paramount for these enterprises as they start to grow and expand in this space.

Scott: Definitely. I was just going to say governance in the AI process, that definitely has to be a future podcast episode, because we get a lot of those questions too. Well, cool, Manish. I have some kind of off-the-wall rapid-fire questions for you. I normally don’t do this, but since you represent such a unique brand, I was kind of curious. If you could give IBM Watson a voice, like if it were a person, like a celebrity or a famous character, who would you choose, and why would you choose that?

Manish: Great question, Scott. Let me say this first, and I’m going to put in a plug: with Watson’s speech capabilities, you can create your own custom voice, even a celebrity voice. For whatever reason, the first names that pop into my head when you ask that are James Earl Jones or George Clooney. Those are the ones that I envision and can hear in my head, if that makes sense.

Scott: If IBM Watson had a hobby or favorite leisure activity, what do you think it would be?

Manish: I think Watson would be interested in something like the America’s Cup, the sailboat races, or Formula One racing. The reason is that, as we discussed earlier, Watson processes a lot of data, both structured and unstructured information, and provides that information to augment and assist the driver or the skipper of the boat in making decisions. Those are the areas where Watson would find a hobby or interest, if you will.

Scott: I like the Formula One example. I actually rewatched a race last night because I finished the whole F1 series on Netflix, so now I’m watching recorded races. The orchestration and speed of how everything happens, especially amongst all the data, people, and processes, is very similar to what we talk about with intelligent automation and using AI to enhance that. So the F1 example is perfect. I mean, do you like F1? You chose those examples. What’s your favorite leisure activity?

Manish: I’m not an F1 driver. I did like NASCAR and other racing growing up, but my favorite hobby or leisure activity is traveling. In terms of sports, I love playing soccer, and I’m trying to learn how to play golf. So those are where I spend a lot of my time right now.

Scott: All right, I like that you enjoy outdoor activities. You live in a place where the weather’s good all the time, so you can easily do both of those. We have the same weather here in Dallas. If IBM Watson were to write a book about its experiences working with businesses, what do you think the title would be?

Manish: There’s an old business book that I read in grad school called “Competing for the Future,” and I think that would be a fitting title for a book written by IBM Watson about its experiences working with businesses. With Watson and AI, we’ve seen advancements from playing Jeopardy to generative AI, but the future is still ahead of us. 

We’re still at the early stages of the race, and I like to think of it as a marathon where we haven’t even crossed the five-mile mark. It’s about finding opportunities for enterprises to augment their processes with AI and Watson, make them more efficient, and help them drive success, whether that be financial or otherwise. We still have a long way to go, but “Competing for the Future” would be my choice for a title.

Scott: Well, I appreciate your insights, Manish. Before we end, do you have any closing thoughts or advice for executives based on our conversation today? We covered topics such as ethical AI, processes, and customer service. Is there anything else you’d like to add?

Manish: I tell companies and enterprises that it’s important to just get started, even if it’s something small. Just pick something small and try it out, even if you fail. You can learn more from your failures than your successes sometimes, so if you don’t try, you won’t learn. The sooner you get on that path, the more opportunities you’ll find for success with this technology and its future capabilities.

