Integration at the Speed of AI

June 14, 2023

Implementing generative AI, or any other form of AI, within a business framework demands more than mere installation. Businesses need to be confident that their data remains secure and that role-based administrative controls enforce least privilege. The AI should be capable of synchronizing with current systems in real time to provide pertinent information and precise responses to users in any format. When a response or conversation involves additional personnel, the AI should maintain the same context for all parties, so everyone understands the narrative and collaborates toward the same objective. Most of all, businesses need to connect and change AI services almost instantly; they cannot continue with slow, resource-intensive deployment and integration cycles.

This is the first episode in a three-part series on quickly connecting AI solutions. It discusses three enterprise requirements for connecting generative AI to your business to improve employee and customer experiences:

    1. You must connect generative AI to enterprise systems in real time
    2. You must be able to instantly integrate and interchange different AI tools and services
    3. You must be able to efficiently synthesize vast data volumes

Links and Resources

Speakers

Scott King

Chief Marketer @ Krista

Chris Kraus

VP Product @ Krista

Transcription

Scott King:

Welcome, I’m Scott King, and this is Chris Kraus. We appreciate you tuning in to the Union podcast. We’re about to begin a compelling three-part series focusing on integrating generative AI, like ChatGPT or Bard, into your business. Chris, a few months ago you authored a paper on this topic, “How to Integrate Generative AI into Your Enterprise.” Could you share the feedback you’ve received? Did it answer readers’ queries? What are their responses after going through the paper?

Chris Kraus:

What often surprises readers is the existence of a clear process. They begin to contemplate the first step of problem-solving: do I possess the necessary data? Is the answer within my reach? And if it is, for instance, scattered across three sources, can I access it? They then understand that the final stage involves requesting the AI, such as generative AI, to consolidate the correct responses. The appreciation stems from the realization that the tech isn’t as opaque as perceived. Rather than being a mysterious black box touted online, there are discernible steps: acquiring the data, comprehending it, searching for specific data segments like an entry in a table, and finally, getting the AI to summarize. Sometimes, the desired outcome is empathetic, sometimes it’s a standard response. It dawns on them that a diverse set of tools is required to make this process function.
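
To make the steps Chris lists concrete, here is a minimal Python sketch of the pipeline: gather the records, keep only the segments relevant to the question, and hand just those to a summarization step. The `summarize()` stub and the keyword matching are illustrative placeholders, not any specific product’s API.

```python
# A toy version of the process: gather records, keep the ones relevant to the
# question, then hand only those to a summarization step.
# summarize() is a stand-in for a call to a generative AI service.

def summarize(question: str, facts: list[str]) -> str:
    # Placeholder for a real generative AI call.
    return f"Answer to '{question}' based on: " + "; ".join(facts)

def answer(question: str, records: list[str]) -> str:
    # Steps 1 and 2: the data is already gathered; keep only relevant segments.
    keywords = question.lower().split()
    relevant = [r for r in records if any(k in r.lower() for k in keywords)]
    # Step 3: ask the AI to consolidate the correct responses.
    return summarize(question, relevant)

records = [
    "Order 1001 shipped on June 12.",
    "The HR travel policy was updated in January.",
    "Order 1002 is awaiting payment.",
]
print(answer("What is the status of order 1002?", records))
```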

Scott King:

From a user perspective, asking questions may seem straightforward, but establishing the necessary connections is a complex task, isn’t it? Now, let’s consider the factors that enterprise tech professionals should bear in mind while attempting to adopt this technology. First, you must ensure that your enterprise systems can connect in real time. As AI services and products are in constant flux, outpacing one another every week, you need the flexibility to interchange them as required. Second, let’s discuss data. Enterprises accumulate vast amounts of data in various forms: databases, applications, PDFs, notepads, Excel files, etc. How do we feed this data into the system for processing and summarization? Let’s start by discussing how to connect an enterprise’s systems in real time. What should professionals bear in mind before initiating a project of this nature?

Chris Kraus:

The first thing to grasp is that this integration is feasible. Many people have had the revelation that pre-trained models like ChatGPT, which scanned the internet and built a database of information to respond from, do not provide real-time data. At the enterprise level, when you ask a question, you want current data. Take an order management system that’s updated throughout the day: to inquire about orders, you need real-time system data. The same applies to something like an HR manual, which is updated once or twice a year when policies change.

People begin to grasp this concept, but the exciting part is recognizing the need for an integration backbone capable of drawing information from various systems. For instance, if you’re managing orders and you have up-to-date orders, shipments, and product quantities, you’d want a system that can access information from warehouse management, order management, and sales and distribution in real time. This would allow customers to inquire about their own orders, get information about their usual products, and ask questions based on that collection of data.

What this real-time connection does is eliminate the need to anticipate every possible question a customer might ask. Traditionally, you had to anticipate all the queries and create reports accordingly. Now, you have a collection of data and can ask novel questions about it without knowing all the answers in advance. Take loan rates for a HELOC, a 30-year fixed, or a 15-year fixed, which change daily. You might ask which offers a better rate today, a HELOC or a 30-year fixed? A few months back, the 30-year fixed might have been the better option. But now, as interest rates have risen, a HELOC could be better. The point is, you can’t anticipate the questions, but you can ask them, process real-time data, and make comparisons or what-if scenarios based on current data.
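
As a rough illustration of why the real-time connection matters, here is a small Python sketch of the loan-rate example: the comparison is computed from data fetched at question time rather than from a pre-trained snapshot. `get_current_rates()` and the rate values are hypothetical stand-ins for a live service.

```python
# The comparison is built from live data at question time, not from a
# pre-trained snapshot. get_current_rates() is a hypothetical stand-in
# for a call to a real rates service; the numbers are made up.

def get_current_rates() -> dict[str, float]:
    return {"HELOC": 8.25, "30-year fixed": 6.9, "15-year fixed": 6.2}

def better_rate_today(option_a: str, option_b: str) -> str:
    rates = get_current_rates()  # fetched fresh for every question
    best = min([option_a, option_b], key=lambda o: rates[o])
    return f"{best} has the lower rate today ({rates[best]}%)."

print(better_rate_today("HELOC", "30-year fixed"))
```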

Scott King:

What about the practicalities of connecting these various systems with differing data formats or speeds? We’re talking about connecting to real-time systems and formulating intelligent prompts for answers. What if the data formats from different systems like order management and warehouse systems are different? How does that interact with generative AI?

Chris Kraus:

What we aim to do is make all dates unambiguous. Instead of saying “0101 2021,” we make it “January 1st, 2021,” to eliminate any ambiguity for the engine. We also normalize certain things, such as using a common name for a customer or a standard format for order numbers. When loading these into the AI model, we standardize them so that the AI can make correlations between the data.
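
Here is a minimal sketch of the kind of normalization Chris describes, rewriting an ambiguous date like “0101 2021” into “January 1st, 2021” before the text reaches the model. It assumes an “MMDD YYYY” input format; a production system would handle many more formats.

```python
# Rewrite ambiguous dates into an unambiguous spelled-out form before the
# text is loaded into the model, so the engine cannot misread the format.

from datetime import datetime

def normalize_date(raw: str) -> str:
    # Assumes "MMDD YYYY"; a real normalizer would detect many formats.
    parsed = datetime.strptime(raw, "%m%d %Y")
    day = parsed.day
    suffix = "th" if 11 <= day <= 13 else {1: "st", 2: "nd", 3: "rd"}.get(day % 10, "th")
    return parsed.strftime(f"%B {day}{suffix}, %Y")

print(normalize_date("0101 2021"))  # January 1st, 2021
```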

Scott King:

Certainly, that standardization must be necessary. Before we move on, is there anything else on real-time systems? This is the main question we receive: “How do I get my data into ChatGPT or Bard?”

Chris Kraus:

The main thing to note is that it is achievable. There’s a method to it, and we can delve into the details if you’re interested in the technicalities.

Scott King:

Alright, let’s discuss the integration of diverse AI tools. You can integrate anything with anything, but it takes time. With the pace of AI innovation, keeping track of all these AI tools would be an impossible task. When we talk about effortlessly integrating and possibly interchanging them, we’re talking about adapting to the changing needs, like the need for a different language translation when expanding to a new region. Can you discuss some of these scenarios and their complexity?

Chris Kraus:

What you need is an AI Integration Platform as a Service (iPaaS). This isn’t your typical MuleSoft, webMethods, or TIBCO iPaaS. It’s purpose-built to provide consistent requests from the user’s perspective, like asking for the sentiment of a statement, its keywords, or a summary of the content. On the front end, you want this consistency. On the back end, the platform’s role is to translate requests into the languages of the different AI engines. There are numerous engines, like ChatGPT, Bard, and Watson, all offering sentiment analysis. This week, OpenAI’s ChatGPT might excel at sentiment analysis, but in three months, Watson might take the lead for both English and French.

The iPaaS should let you swap the AI engines underneath seamlessly. While your application connects to the iPaaS for AI services, the back end can change between different AI engines without the user knowing. Given the rapid advancements in AI, in six months there will be a completely different landscape, with multiple vendors offering these services. Over time, some will gain strength and others will lag behind. You might want to swap out one AI service for another, but you don’t want to rewrite your application or worry about changing JSON and XML formats. You want a plug-and-play system, which is what an AI iPaaS offers.
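
A minimal sketch of the swap-underneath idea: the caller always requests sentiment the same way, while a configuration setting picks which engine runs behind the interface. The provider functions are stubs, not real vendor SDK calls.

```python
# The front end stays identical no matter which engine runs underneath;
# swapping engines is a one-line configuration change.

from typing import Callable

def openai_sentiment(text: str) -> str:
    return "positive"  # stub: a real system would call OpenAI here

def watson_sentiment(text: str) -> str:
    return "positive"  # stub: a real system would call Watson here

PROVIDERS: dict[str, Callable[[str], str]] = {
    "openai": openai_sentiment,
    "watson": watson_sentiment,
}

ACTIVE_PROVIDER = "openai"  # change this one setting to swap engines

def sentiment(text: str) -> str:
    # Callers never see which engine answered.
    return PROVIDERS[ACTIVE_PROVIDER](text)

print(sentiment("The delivery arrived early and intact."))
```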

Scott King:

You did mention JSON and XML, and that this is different from traditional iPaaS like MuleSoft or SnapLogic, which many people use. What are the limitations of these tools when it comes to connecting with AI services?

Chris Kraus:

Two things. First, yes, these tools can build connectors to a multitude of services and surface them. But you need a middle logic layer that knows how to orchestrate multiple AI services into one to accomplish tasks. For instance, you might want translation, sentiment analysis, and keyword extraction all within one service instead of requiring the business person to call three separate things on separate endpoints. Some aggregated services will be needed, presented in a human-readable way in a catalog; a Swagger spec or OpenAPI spec, while great, isn’t something everyone can understand. Therefore, you want orchestration across multiple services, along with some intelligence to decide which service to use based on, say, the language. The platform shouldn’t just present atomic building blocks; it should aggregate functionality and orchestrate services, shielding users from having to know these nuances.
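
The orchestration Chris describes might look something like the following sketch: one aggregated `analyze()` service that runs translation, sentiment, and keyword extraction in a single call, so the business user never touches three separate endpoints. All three steps are stubs for whichever engines happen to be configured.

```python
# One human-facing service orchestrating three atomic AI services.
# Each step is a stub; the middle layer would route each call to the
# best engine for the task and language.

def translate(text: str, target: str = "en") -> str:
    return text  # stub: route to the best translation engine

def sentiment(text: str) -> str:
    return "neutral"  # stub: route to the current sentiment engine

def keywords(text: str) -> list[str]:
    return [w for w in text.split() if len(w) > 6]  # stub heuristic

def analyze(text: str) -> dict:
    # One call on one endpoint, three AI services underneath.
    english = translate(text)
    return {
        "translation": english,
        "sentiment": sentiment(english),
        "keywords": keywords(english),
    }

print(analyze("The replacement shipment resolved everything perfectly."))
```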

Scott King:

Regarding the time involved, you’ve mentioned the need to interchange services. Some workflows might use one service, others another, or you might swap services over time. You could, of course, do this manually: hire a developer and have them build it, but that would take a lot of time. When you say you need to interchange services, what kind of timeframe are we talking about? Is this something I could do with an API key, or do I need to develop an entirely new app? What’s the difference in time?

Chris Kraus:

The key advantage of a well-designed AI iPaaS is that it allows for dynamic changes. You can test one service, swap it out, and test another, all by selecting from a dropdown menu. If you want to compare the sentiment analysis capabilities of three different tools, it’s as simple as that. Now, if you were developing such a capability yourself, you’d first need to design an application to shield yourself from these various interfaces. Each time you want to plug in a different service, you’d need to research, test, and evaluate each option, which can take weeks to months. The alternative is to hard-code each connection, which would be akin to stepping back into the “old-school” 1999 approach. Every time a connection breaks, you’d have to create a new version of the app, compile it, and deliver it.

To avoid that, many people resort to point-to-point connections when they need things done fast. If you opt to spend six months designing a platform and a harness for this, you’d essentially be sacrificing valuable time that could be used for other tasks. Therefore, it’s much more efficient to use these capabilities pre-built as a service.

Scott King:

Yes, it would indeed be time-consuming, and most organizations already have extensive backlogs.

Chris Kraus:

Exactly.

Scott King:

If a company didn’t have any integration backlogs, they could potentially handle it in-house, but that’s an anomaly. In most cases, businesses can’t keep up with the pace.

Chris Kraus:

Right.

Scott King:

So that’s all about connecting and switching out systems. Let’s discuss the data aspect. I’ve used ChatGPT to clean up podcast transcriptions, but the transcriptions are often too long, exceeding the character limit. ChatGPT is effective in removing filler words like ‘like’ and ‘so,’ but when connecting to real-time systems and swapping out AI services, the need to process more data arises. How do we overcome this limitation?

Chris Kraus:

The first step to overcoming data limitations is gathering as much data as possible. You’d want to create a document store to hold all kinds of documents such as PDFs, Word files, Excel files, PowerPoints, even web pages. The catch is that this data must be documented somewhere; it can’t be tribal knowledge in someone’s head. Once you’ve ingested this data into a document store, you can leverage OCR and other technologies to read and understand the content, thus creating the “problem” of having too much data.
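
A rough sketch of that first step, assuming a simple folder of files: walk the directory, extract what text you can from each file, and index it for later search. Real ingestion would need format-specific extractors and OCR for PDFs, Word, Excel, PowerPoint, and web pages; the `./policies` folder here is hypothetical.

```python
# Walk a folder, extract text from each file, and build a simple
# document store keyed by path. Only plain text is handled here;
# other formats would need their own extractors (plus OCR for scans).

from pathlib import Path

def extract_text(path: Path) -> str:
    if path.suffix == ".txt":
        return path.read_text(errors="ignore")
    # Placeholder: PDFs, Word, Excel, PowerPoint, and web pages would
    # each need a format-specific extractor.
    return ""

def ingest(folder: str) -> dict[str, str]:
    store = {}
    for path in Path(folder).rglob("*"):
        if path.is_file():
            text = extract_text(path)
            if text:
                store[str(path)] = text
    return store

store = ingest("./policies")  # hypothetical folder of policy documents
```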

The second step involves addressing the issue of data overload. For instance, if you search a document store for return policies, you’re likely to find multiple versions for different regions. The information returned may exceed the capacity of generative AI, which typically requires smaller blocks of data for effective summarization. It’s crucial to devise a strategy to break large data chunks into manageable pieces and derive summaries of the different components. However, having too much data at the start is preferable, because it means we have all the right material for effective searching. The trick lies in knowing how to reduce the data chunks so that the AI doesn’t return an error due to data overload.
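
One common way to handle the overload, sketched below, is to split a large document into pieces that fit the model’s input limit, summarize each piece, then summarize the summaries. The 4,000-character limit and the `summarize()` stub are illustrative, not any vendor’s actual constraint or API.

```python
# Break a large document into pieces small enough for the model's input
# limit, summarize each piece, then summarize the combined summaries.

MAX_CHARS = 4000  # illustrative limit, not any vendor's real constraint

def chunk(text: str, size: int = MAX_CHARS) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize(text: str) -> str:
    return text[:80]  # stub: would call a generative AI service

def summarize_large(text: str) -> str:
    if len(text) <= MAX_CHARS:
        return summarize(text)
    partials = [summarize(c) for c in chunk(text)]  # summarize each piece
    return summarize_large("\n".join(partials))     # then combine the summaries
```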

Scott King:

So it’s about refining the search and focusing on the specific areas that are most relevant to the query, like narrowing down a search to a specific neighborhood, correct?

Chris Kraus:

Exactly. You want to narrow the focus to give the AI only the right answers to summarize, eliminating any unnecessary fluff.
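
Narrowing the focus might look like the following sketch: score the stored chunks against the question and pass only the top few to the model. The crude keyword-overlap scoring here stands in for the semantic search a real system would use.

```python
# Score stored chunks against the question and keep only the best
# matches, so the model summarizes answers rather than fluff.

def score(question: str, chunk: str) -> int:
    q_words = set(question.lower().split())
    return len(q_words & set(chunk.lower().split()))

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

chunks = [
    "US return policy: items may be returned within 30 days.",
    "EU return policy: items may be returned within 14 days.",
    "Shipping rates are calculated at checkout.",
]
print(top_chunks("What is the US return policy?", chunks, k=1))
```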

Scott King:

That makes sense. An employee’s question, for instance, would be quite specific. They might want to know how to submit their cell phone bill for reimbursement, which would involve searching the relevant section of the company’s policy document. So, it seems the challenge is in synthesizing vast quantities of data while ensuring the process isn’t hampered by data volume. Is there anything else we need to know on this front?

Chris Kraus:

As we’ve discussed, the key lies in understanding that there are two types of data. The first is policy or written data contained in documents such as PDFs or PowerPoints; the more of this data we have, the better equipped we are to answer a variety of questions. The second type is real-time data from systems, such as order statuses or inventory levels, which also needs to be made available for effective question answering.

Scott King:

I see, this would definitely save time as we wouldn’t need HR professionals manually searching systems for answers.

Chris Kraus:

Exactly.

Scott King:

Well, that wraps up this episode. Thank you, everyone, for tuning into the first part of our series on how to integrate generative AI into your enterprise. In our upcoming episodes, we’ll explore automating processes, involving people in these processes, and ensuring proper data governance. Thanks, Chris, for joining us. Until next time.

Guide: How to Integrate Generative AI
