Generative AI for Agile Knowledge Management

November 15, 2023

Generative AI is Accelerating Everything

Generative AI (GenAI) is influencing nearly every process in our businesses, and none more than knowledge management. Employees want a better experience, and many have already found it by experimenting with GenAI on their own: ask a question, get an answer. But the answers and knowledge delivered through public interfaces aren't always correct.

Julie Mohr is a Principal Analyst at Forrester covering IT service management and enterprise service management. I spoke with Julie about how successful knowledge management strategies are evolving and how GenAI is accelerating change. Julie spotlights the shift from old-school waterfall techniques to agile knowledge management strategies and describes how GenAI is set to overhaul how companies capture, update, and apply organizational knowledge.

Here is my take on the conversation.

The State of Knowledge Management

Many companies treat knowledge management as a secondary priority. They rely on slow, outdated waterfall schedules that hinder innovation. The result is sluggish, cumbersome knowledge sharing: organizational knowledge is out of date, or it never makes it into written form at all and lives on as tribal knowledge. Employees struggle to locate relevant, current information, which leads to delays in decision-making and missed opportunities. When knowledge is written down, it is often confined to Word documents entangled in complex document management systems with obscure naming conventions that are difficult to navigate. Because the process is a low priority, documents are seldom updated, creating gaps between documented knowledge and current requirements. Without a shift toward more agile, responsive knowledge management practices, companies risk falling further behind, fostering stressful work environments that contribute to high employee turnover and poor customer service.

The Future of Knowledge Management

As we steer into the future, it is becoming abundantly clear that organizations must adopt agility in their knowledge management process to serve real-time knowledge within business processes. The traditional waterfall approach to knowledge creation, perfection, and sharing has proven too slow for today’s fast-paced business environment, often leaving crucial insights outdated or irrelevant. By contrast, an agile knowledge management process—characterized by immediate availability of information, continuous refinement, and universal ownership—brings vitality and relevance to the organization’s corpus of knowledge. When everyone takes part in knowledge creation, it becomes a living, breathing entity that is an accurate reflection of its time, delivering precise answers when they’re needed the most. It’s not just about having knowledge; it’s about having the right knowledge at the right time, and that is the future of knowledge management strategy.

The Role of Generative AI in Agile Knowledge Management

Generative artificial intelligence and large language models (LLMs) turn complex, abundant data into user-friendly formats, and in doing so they are transforming knowledge management practices. Natural language processing (NLP) reduces the challenges of capturing, storing, finding, and transferring knowledge to and from your employees, potentially altering how information is managed across your organization. GenAI also accelerates knowledge transfer, enabling quick, iterative knowledge delivery to customers by changing how end users interact with your knowledge management tools. It provides intuitive, conversational interactions, replacing long lists of answers or links to knowledge articles with an engaging, real-time dialogue tailored to the user's level of understanding. Additionally, GenAI enriches the employee experience, making knowledge transfer less tedious and more autonomous. It minimizes application switching, reduces the endless search for information, and alleviates repetitive tasks, creating a more meaningful employee experience.

Getting Started with Generative AI

Integrating generative AI models into your knowledge management system doesn't have to be overwhelming; starting small is key. Technology leaders and knowledge managers should identify specific use cases within knowledge management where GenAI can bring value, such as content creation, knowledge organization, or search and retrieval. Knowledge management and company policies are excellent starting points. Utilizing platforms like Krista can help govern data, ensure accuracy, and limit hallucinations, providing a controlled environment in which to learn, iterate, and expand GenAI to more use cases. Prioritizing data quality is crucial, as the output of artificial intelligence is only as good as the input data. We've all heard "garbage in, garbage out" on other IT projects, and the same applies here. Developing projects with clear objectives allows you to test and evaluate GenAI's feasibility and supports an iterative approach to continuous improvement. During our discussion, Julie Mohr emphasized the urgency of embracing GenAI, stating,

“The thing that I think is the most risky is not embracing it. Employees are reading all the hype and they’re trying it… So by beginning and exploring this as quickly as you can with your vendors that are trusted, with your data that’s already got the tight controls around it, you’re likely to get the best benefit from utilizing generative AI as quickly as possible.”

"The thing that I think is the most risky is not embracing it [Generative AI]."

This approach ensures a strategic, controlled integration of GenAI into your knowledge management system, or any other business process, maximizing benefits while minimizing risks.
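To make the "start small" advice above concrete, here is a minimal sketch of one controlled starting point: answer employee questions only from an approved set of policy documents and decline when nothing relevant is found. The function names and the generate_answer stub are hypothetical placeholders for whichever governed platform or model you use, not any vendor's actual API.

```python
# Minimal sketch of a governed question-answering starting point.
# `generate_answer` is a stub for whatever vetted LLM endpoint you use.

APPROVED_POLICIES = {
    "maternity leave": "Employees may be entitled to up to 12 weeks of leave under the FMLA...",
    "jury duty": "Approved jury duty policy text goes here...",
}

def search_policies(question: str) -> list[str]:
    """Naive keyword retrieval over the approved corpus; swap in real search later."""
    q = question.lower()
    return [text for topic, text in APPROVED_POLICIES.items()
            if any(word in q for word in topic.split())]

def generate_answer(prompt: str) -> str:
    """Placeholder for the governed LLM call."""
    return "[draft answer grounded only in the supplied policy text]"

def answer(question: str) -> str:
    context = search_policies(question)
    if not context:
        # Refusing to guess is one simple control against hallucinations.
        return "No approved policy covers this question; please contact HR."
    joined = "\n\n".join(context)
    prompt = (
        "Answer using ONLY the policy text below. If it does not answer the "
        f"question, say you don't know.\n\nPolicy text:\n{joined}\n\nQuestion: {question}"
    )
    return generate_answer(prompt)

print(answer("Do we receive maternity or paternity benefits?"))
```

Starting with a narrow corpus like this is what makes it practical to learn and iterate before expanding to more use cases.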

Links and Resources

Host

Scott King

Chief Marketer @ Krista

Guest Speaker

Julie Mohr

Principal Analyst @ Forrester

Transcription

Scott King

Welcome to this episode of the Union Podcast. I'm Scott King, joined by special guest Julie Mohr. Julie is a Principal Analyst at Forrester, serving ITSM and enterprise service management clients. Julie, could you share your current projects and your coverage area? Then let's discuss knowledge management.

Julie Mohr

Certainly. I cover IT service management and enterprise service management, primarily focused on infrastructure and operations. I also work a bit with platform teams and developer experience. However, knowledge management has been my passion since joining Forrester nearly two years ago. I’ve developed an extensive amount of research related to knowledge management.

Scott King

I see. Let's clarify: when you refer to knowledge management, do you mean any type of knowledge? I'm also interested in understanding the evolution of knowledge management, its past and its potential future.

What is knowledge management?

Julie Mohr

Indeed. My initial focus on knowledge management was part of IT service management, due to its crucial role in the infrastructure and operations group. For instance, consider a support call interaction where I'm looking up potential resolutions in a knowledge base. However, knowledge management is an enterprise process. It's not merely data management; it's about providing the right information to aid decision-making across the enterprise. Sometimes this requires access to knowledge outside of your core systems, which is crucial for supporting decisions. Hence, it is an enterprise-level process. You asked a two-part question there, Scott; I've addressed the first part.

Scott King

I was already jumping the gun, right? I manage my own knowledge management content. I write everything down because I get a lot of "where is" and "what is" questions. But, you know, a lot of it's manual, right?

Julie Mohr

We’re excited about the content.

Scott King

I assume we’re moving from manual updating and maintenance to AI.

Will AI generate all this for us?

Julie Mohr

First, we have to look at the existing knowledge management process. Most organizations have a waterfall type of approach where the creation, perfection, and sharing of knowledge takes too long, so knowledge becomes available a couple of weeks, maybe a month, later. I advocate for an agile knowledge management practice. Organizations should focus not just on creating knowledge within workflows but also on surfacing that knowledge as quickly as possible, even if it may be imperfect.

That way, the information is available immediately to support decision-making. Before we talk about generative AI, organizations need to have a good practice in place, because there's a need to train or fine-tune an LLM on relevant information.

If you have a poor knowledge management practice today, you're going to introduce issues when you fine-tune that LLM. In an agile knowledge management practice, everyone treats knowledge as a core part of their job. We're storing and perfecting knowledge as we use it. If you stumble upon something and realize it's incorrect, you take ownership of fixing it.

KCS (Knowledge-Centered Service) is a great example of agile knowledge management for service and support organizations. You really need an agile practice where everyone is focused on knowledge and it stays relevant and up to date. Then we can start talking about how generative AI will influence us.
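Julie's point that a weak practice gets baked into the model is easy to see when you assemble training or grounding data. Below is a hedged sketch with made-up article fields such as status and last_validated; it filters out drafts and stale articles before exporting prompt/completion pairs for fine-tuning. The field names and freshness threshold are illustrative assumptions, not from any specific tool.

```python
import json
from datetime import date, timedelta

# Hypothetical knowledge-article records; the field names are illustrative.
articles = [
    {"question": "How do I reset my VPN token?",
     "answer": "Open the self-service portal and choose 'Reissue token'.",
     "status": "validated", "last_validated": date.today() - timedelta(days=30)},
    {"question": "How do I request maternity leave?",
     "answer": "(outdated draft)",
     "status": "draft", "last_validated": date.today() - timedelta(days=500)},
]

MAX_AGE = timedelta(days=180)  # anything older is treated as stale

def export_training_pairs(articles, path="km_finetune.jsonl") -> int:
    """Export only validated, recently reviewed knowledge: garbage in, garbage out."""
    kept = 0
    with open(path, "w", encoding="utf-8") as f:
        for a in articles:
            if a["status"] != "validated":
                continue  # drafts stay out of the training set
            if date.today() - a["last_validated"] > MAX_AGE:
                continue  # stale knowledge would otherwise be baked into the model
            f.write(json.dumps({"prompt": a["question"], "completion": a["answer"]}) + "\n")
            kept += 1
    return kept

print(export_training_pairs(articles), "articles exported")
```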

Scott King

When you mention agile, can you quantify that? Are we talking about a month, a week, a day, an hour, a minute, a second?

Julie Mohr

Ideally, knowledge should be available immediately. I understand some people worry about inaccurate information. However, the way to refine knowledge, make it work and be effective, is by using it when necessary. For instance, consider the time spent perfecting a knowledge article that resembles a Word document. It requires a lot of work to ensure correct spelling, subject-verb agreement, punctuation, grammar, and style. If no one uses that knowledge article, the time spent perfecting it is time wasted.
In an agile approach, knowledge is part of demand. If someone calls in or another employee has a question, we capture that in real-time. If that question never comes up again, we don’t spend any time perfecting it. If it does, then the person who surfaced that information is responsible for its accuracy.

We perfect within our workflows rather than outside of them. This approach economizes the effort we put into knowledge management. Knowledge articles should resemble PowerPoint slides rather than Word documents. There’s a significant difference in both structure and the way we write a PowerPoint slide – a brief statement that conveys enough information to get the point across.

Contrasting a Word document with a PowerPoint slide, it is evident that the latter is easier to create while working and easier to read. Keyword optimization is more effective with simpler text, improving findability. This value comes from an Agile Knowledge Management practice, which provides just-in-time knowledge. It’s precisely what one needs to find an answer. Everyone is responsible for keeping that knowledge up to date. It’s an effective way to manage knowledge in an organization.

Scott King

Indeed, the popularity of mobile devices and the ability to generate content or knowledge has conditioned us to expect real-time information on almost anything. However, the workplace often lags behind in this regard. I wanted to understand the timeframe of knowledge generation because it's important, especially as things change so rapidly. Let's consider the generative AI aspect. People have used public GenAI tools like Bard or ChatGPT, and this has reshaped their expectations.

How have the advancements in LLMs and Generative AI models influenced knowledge management systems?

Julie Mohr

Certainly, one of the positive use cases I see for generative AI centers around knowledge management, particularly when we bring an LLM internal to the organization. When data policies or governance aren’t a concern because we’re using it internally, the capabilities of generative AI can greatly boost agility within knowledge management. Take search, for instance. It was often tricky because an end user or an internal employee might pose a question quite differently from a technology person.
Let me give you an example. My stepfather used to call me for support saying, “The Internet’s down.” In my context, that meant he was having network connectivity issues. If he came across a knowledge article titled “network connectivity” when he typed in “Internet is down,” he wouldn’t click on it, fearing he might do something wrong.

Scott King

Indeed, the major ocean cable suffered a cut, correct?

Julie Mohr

Yes. Now, users can search using their language. Thanks to extensive training of the LLMs on a large corpus of language, they can understand and match user intent with relevant responses. A carefully engineered prompt can meet the user where they are. For example, if I say, “Explain network connectivity as if you were talking to a five-year-old,” it simplifies the user’s understanding. Hence, the search capabilities are immensely improved by this language understanding.
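One common way to implement the matching Julie describes is with text embeddings: the user's phrasing ("The Internet's down") and the article titles are mapped into the same vector space and compared, so no shared keyword is required. The sketch below is hedged; the embed function is a toy stand-in so the example runs end to end, and a real embedding model is what would actually do the semantic matching.

```python
import math

def embed(text: str) -> list[float]:
    """Toy stand-in so the example runs; in practice, call a real embedding model,
    which is what actually places 'Internet is down' near 'network connectivity'."""
    vec = [0.0] * 16
    for word in text.lower().split():
        vec[hash(word) % 16] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

ARTICLE_TITLES = [
    "Network connectivity troubleshooting",
    "Password reset",
    "Printer setup",
]

def best_match(user_question: str) -> str:
    q = embed(user_question)
    return max(ARTICLE_TITLES, key=lambda title: cosine(q, embed(title)))

title = best_match("The Internet's down")
# Meeting the user where they are, as Julie suggests:
prompt = (f"Using the article '{title}', explain the likely problem "
          "as if you were talking to a five-year-old.")
```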

Then we consider the ability to create new knowledge articles. I can create a prompt that says, "Based on this transaction with a user or customer, take these fields I've populated within Salesforce or my IT service management system and put them into the following knowledge template." All of this is done with a click of a button. Instead of crafting a knowledge article from scratch, the individual receives a draft, approves it, and it's done. Generative AI can speed up the agility of a knowledge management practice and overcome the traditional hurdles of knowledge management.
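The "click of a button" drafting Julie describes usually amounts to filling a prompt template with fields pulled from the ticketing system and handing the result to a person to approve. Here is a rough sketch; the field names (short_description, category, resolution_notes) are invented for illustration rather than taken from any particular Salesforce or ITSM schema, and llm stands in for whatever model callable your platform exposes.

```python
# Field names below are invented for illustration; map them to whatever
# your ITSM or CRM system actually stores.
TEMPLATE = """You are drafting a knowledge article from a resolved support ticket.
Use exactly this structure:

Title:
Symptoms:
Cause:
Resolution:

Ticket fields:
Short description: {short_description}
Category: {category}
Resolution notes: {resolution_notes}
"""

def draft_article(ticket: dict, llm) -> dict:
    """Return a draft for a person to review and approve; nothing is auto-published."""
    prompt = TEMPLATE.format(**ticket)
    return {"body": llm(prompt), "status": "draft"}

# Usage, with `llm` being whatever callable your platform exposes:
ticket = {"short_description": "VPN drops every 30 minutes",
          "category": "Network",
          "resolution_notes": "Reissued certificate; asked user to reconnect."}
article = draft_article(ticket, llm=lambda prompt: "[model-generated draft]")
```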

Scott King

An important aspect to consider is your example of incorporating the customer interaction into a prompt along with an existing template. That's a complex prompt that the average customer service or ITSM person might overlook. We're looking at creating a knowledge article, but the LLM lacks context; your example provided full context. That in itself should serve as a knowledge article. It's crucial to structure these prompts properly. If we lack the right context, we might end up confusing "network" with "internet." And a five-year-old is simply looking for a power outlet, relying on the battery of their device. So generating the correct context for your prompt is a significant deal.

Julie Mohr

I had the opportunity to explore the power of the LLM through a specific use case. About a decade ago, I built a bot from scratch. I wanted to understand what it required as chatbots were emerging in the industry. I had a corpus of knowledge that I intended to use for training this bot. It was an immense amount of work, particularly inputting the information and ensuring it matched the right intent with the appropriate responses. The variety of questions posed to the chatbot only added to the workload, as I had to connect the intents before the system could self-learn. The project took a long time, and I'm not sure the chatbot ever functioned as I had planned.

However, the new capabilities and the language comprehension that comes with generative AI have revolutionized the process. Now, I don’t need to worry about greetings, for instance. There are numerous ways to say hello in English. But the LLM can understand and match the context of a question to the context of a response, which is incredibly powerful compared to the time of building the initial chatbot.

This breakthrough brings immense possibilities, especially for transactional interactions. When you ask a question and I provide a response, that process can be reused by an organization. The optimal way to make this feasible is to use generative AI in the loop, which assists in creating and surfacing the response for others who may ask similar questions.

Scott King

In your chatbot, were you manually mapping the intent the first time you saw it? Did you have to adjust your bot and edit that?

Julie Mohr

Some of it was manual. When I uploaded the content (you can input some information into the system initially), some of it was already matched. The difficulty arose when people I encouraged to try the bot started asking questions that were out of scope. They posed queries in various ways that the bot didn't yet comprehend. Mapping intents and responses was an immense task for one individual; with a team, you could get it running more quickly. With the generative approach, I managed as a single person to upload the same body of information I had before. Additionally, I pointed it at specific websites. The difference was substantial. Previously, it would respond with, "I don't know." Now, it attempts to answer and can point to the source of the information, providing a trail of its thought process. Instead of a canned response, it's generating a response in natural language. It's a transformative approach to knowledge management. The bot can now even simplify language upon request, something previously impossible. The capabilities this brings to knowledge management are incredible, and it's evident across the industry, where numerous vendors are embracing generative AI for knowledge management.
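The before-and-after Julie describes, hand-mapped intents versus a bot that drafts an answer and points to its source, is essentially what is now called retrieval-augmented generation (RAG). Below is a rough sketch, with retrieve and llm passed in as placeholders for whatever search index and model are in use; the signatures are assumptions for illustration.

```python
def answer_with_sources(question: str, retrieve, llm) -> str:
    """Draft a natural-language answer from retrieved passages and always show
    where it came from, instead of a flat 'I don't know' or a canned response."""
    passages = retrieve(question, top_k=3)  # expected shape: [(source_id, text), ...]
    if not passages:
        return "I couldn't find anything on that yet."  # honest fallback, not a guess
    numbered = "\n\n".join(f"[{i + 1}] {text}" for i, (_, text) in enumerate(passages))
    answer = llm(
        "Answer the question using only the numbered sources below, citing them "
        f"like [1].\n\nSources:\n{numbered}\n\nQuestion: {question}"
    )
    citations = "\n".join(f"[{i + 1}] {src}" for i, (src, _) in enumerate(passages))
    return f"{answer}\n\nSources:\n{citations}"
```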

Scott King

Do you think organizations with established chatbots or conversational products are at an advantage? They’ve had these for years and a dedicated team behind their development. How do these organizations compare to those just discovering generative AI, seeing its potential but lacking the teams or resources? Are the two on equal footing, or does understanding intent give the former a head start? Can the latter catch up if they are indeed lagging behind?

Do you think organizations with established chatbots or conversational AI products are at an advantage?

Julie Mohr

Before the advent of generative AI, adopting a chatbot could take up to two years to deliver a product you could be confident in. Now, the process is significantly faster, and the outcome is transformative. It's not a simple comparison from an adoption standpoint, since the wait time is considerably reduced.

The differentiating factor here is the quality of data used for training the LLM or fine-tuning it. A well-maintained, current, relevant knowledge base can make your generative AI adoption profoundly effective. This could enable your team to focus on new types of knowledge, perhaps storytelling, or tacit knowledge that’s not easily documented.

Most organizations focus on explicit knowledge in their knowledge management practice. However, tacit knowledge—the knowledge we gain through experience that often goes undocumented—contributes significantly to our experience. By focusing more on this tacit knowledge, an established practice can magnify its impact across the organization.

Certainly, this will decrease the burden of maintaining and creating knowledge, making it more impactful overall. Organizations without a knowledge management practice may face difficulties. If you lack quality data, fine-tuning the model for your specific environment could be challenging.

The key to success is having reliable data for training. While you may not have an exceptional knowledge management practice or chatbot today, reliable data sources can expedite the adoption process. Given good data, you’ll be able to overcome the gap much faster than previously possible.

Scott King

I'm about to demonstrate an employee's experience navigating our internal SharePoint site. Here we have demo documents for human resources. Suppose an employee has a question; they might feel compelled to review all these Word docs for an answer. Let's say I'm inquiring about a maternity leave policy. While that isn't a current concern in my own circumstances, it's a common query for others. So, navigating this, I find the relevant document and then need to read it thoroughly to understand how it applies to me.

Julie Mohr

Scott, you’re presenting Word documents. It’s quite difficult to find a specific answer within them due to their hard-to-navigate nature.

Scott King

Word documents exist either on our local systems or in OneDrive. Many of these documents may not be regularly updated. The title of the document often guides my search for information. For example, ‘Jury Duty’ or ‘Parking’. ‘Paydays’ is another term that may not immediately come to mind when looking for information about payroll or paycheck. Thankfully, advancements in Natural Language Understanding (NLU) interpret the user’s intent, translating ‘Payday’ to ‘Payroll’ or ‘Paycheck’, making the search more efficient.

When I interface with an LLM, say Krista, I can ask questions about sensitive topics like maternity leave. I’ve taken all those Word documents and consolidated them here. This is a front end to all the Word documents, making it easier for employees to find information.

Suppose I have a question for HR. Typical scenarios involve asking an HR representative questions, similar to IT support desk inquiries about internet connectivity at a remote location. In this case, I will ask, "Do we receive maternity or paternity benefits?"
The response is, "Yes, employees may be entitled to maternity benefits under the Family and Medical Leave Act." I wouldn't have searched for that information on my own. The next question becomes, "Do you want to request leave?" I confirm that I want to inquire about maternity leave.

I’m a full-time employee, so I decide to take 40 hours off. It’s impressive that the system can look up my leave balances from the payroll system. Not only does it answer my question, but it also checks the payroll system to see if I have enough time off to take maternity leave. According to the system, I have 64 vacation hours, 40 sick leave hours and the option to take 12 weeks of maternity leave plus vacation time. This was new information for me, as I wasn’t aware that I was entitled to 12 weeks of leave.

Julie Mohr

I must say, Scott, 40 hours is not nearly enough.

Scott King

That’s my expectation because I’m merely driving to and from the hospital. It doesn’t inconvenience me. It’s a completely different scenario for mothers. Then it asked me for the maternity leave start date. That’s nice. I plan to start this coming Monday.

Upon selecting the maternity leave start date, it feeds that into the payroll system. It indicates that I will be absent, notifies my manager, and initiates the Family and Medical Leave Act process. It's impressive how it ties it all together. Knowledge management processes can clearly be enhanced with generative AI, but if AI can also assist in executing the process itself, I believe that is equally or even more valuable. What's your take, Julie?

What is your take on how Generative AI models will assist knowledge management processes?

Julie Mohr

Examining the responses, we’re providing the answer. Documents and knowledge articles can be challenging to read. Rather than copying and pasting a specific paragraph from a document, we’re offering a direct answer. For instance, you might hear, ‘Scott, you get 40 hours for maternity leave.’

Scott King

I suppose I should adjust my script. Forty hours, when it could be 12 weeks, right? Perhaps I'll request 120 hours next time.

Julie Mohr

Indeed. This approach improves the transactions significantly. In the background, you haven’t had to write that script, or create a human-like response. Chatbots in the past often referred to a knowledge article or a document, or if they didn’t, a written response was required within the system. This method is more conversational. Transactions will feel more personal—they’ll use your name, and say, ‘Based on information from this system, you’re eligible for XYZ.’ Even though it’s automated, it’s human-like.
This is what generative AI brings—the ability to comprehend human language means the responses from a machine will seem more human within those transactions.
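The demo Scott walked through, answering the policy question and then checking balances in payroll before starting the leave workflow, is typically built by letting the model (or the orchestration layer around it) call backend functions and then phrase the result conversationally. The functions below are invented for illustration only, with the balance figures taken from the demo; real payroll and HR integrations will differ.

```python
# Invented backend calls for illustration; real payroll/HR integrations will differ.
def get_leave_balances(employee_id: str) -> dict:
    return {"vacation_hours": 64, "sick_hours": 40, "fmla_weeks": 12}

def submit_leave_request(employee_id: str, start_date: str, hours: int) -> str:
    # A real workflow would also notify the manager and open the FMLA case.
    return f"Leave request for {hours} hours starting {start_date} has been submitted."

def handle_leave_request(employee_id: str, start_date: str, hours: int) -> str:
    balances = get_leave_balances(employee_id)
    if hours > balances["vacation_hours"] + balances["sick_hours"]:
        return "You don't have enough accrued time for that request."
    confirmation = submit_leave_request(employee_id, start_date, hours)
    # An LLM would phrase this conversationally and by name ("Scott, you're
    # eligible for..."); the facts still come from the systems of record.
    return (f"You have {balances['vacation_hours']} vacation hours and "
            f"{balances['sick_hours']} sick hours, plus up to "
            f"{balances['fmla_weeks']} weeks of FMLA leave. {confirmation}")

print(handle_leave_request("scott", "2023-11-20", 40))
```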

Scott King

Conversation is the initial form of communication, preceding written languages. If you can interact in natural language with a machine comfortably, things will progress faster. We should be able to experience better support at work and better vendor interactions.

Consider knowledge management, focusing on customer support. The turnover in these organizations is high due to the difficulty of the job, as the information is scattered. In this context, a tool like this eases the process for all parties involved.

Julie Mohr

Absolutely.

Scott King

Okay, we discussed imperfect knowledge. We should have solid knowledge management systems, or at least some type of portal or documents—start somewhere. Then, let’s consider data governance. With all this in a private environment, you probably don’t need to worry about data governance. What other controls would be appropriate outside of data governance? Anything come to mind?

What other AI controls would be appropriate outside of data governance?

Julie Mohr

There’s a philosophy we need to discuss—the concept of perfect versus imperfect knowledge. Organizations often view sharing imperfect knowledge as a risk. That’s why we spend so much time perfecting knowledge. It needs to be exact before we share it.
The time it takes to perfect knowledge is often wasted time and effort. One problem with generative AI, which gets a lot of press, is around hallucinations. What happens if it surfaces imperfect information?

Humans have given me imperfect answers over the years. Perfection should not be our goal; we should treat all knowledge as imperfect. If you get an answer from someone and it doesn't work, it's no longer a trusted answer, and you go ask someone else.
If we treat knowledge as imperfect, we can provide more governance in the environment. Suppose I’m the first person that created the draft by using the LLM. In that case, I can create a knowledge article from this transaction, which should go into a state that tells everyone who surfaces that knowledge that it’s a draft. You’re essentially communicating to everybody, “This is imperfect knowledge.”

If you use it and it works, we can change the status to "This has been validated." This is very much the KCS (Knowledge-Centered Service) approach to knowledge management. Once we change that state, we're essentially saying, "Look, we created a draft, we've used it a second time, and it worked. Now we know it can be used by a broader audience, so we can increase its visibility."

While knowledge is in this imperfect state, as it's being created and generated by generative AI, we keep its visibility restricted until it has been used a couple of times by our trusted people; then its visibility can expand. Instead of controls where knowledge has to go through a lengthy perfection process before anyone can use it, we build a process that treats all knowledge as imperfect and improves it as we interact with it. We validate that it works, and then we expand its visibility. That's a control built into the process: we perfect knowledge as we use it, rather than spending all that time and effort perfecting it before it can be shared.
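The control Julie describes, where AI-generated knowledge starts restricted and earns broader visibility as it proves itself in use, maps naturally onto a small lifecycle model. The sketch below is illustrative only; the state names follow the KCS-style flow she outlines, and the reuse thresholds are assumptions rather than prescribed values.

```python
from dataclasses import dataclass

ALLOWED_TRANSITIONS = {"draft": {"validated"}, "validated": {"published"}, "published": set()}
VISIBILITY = {"draft": "trusted team only", "validated": "department", "published": "everyone"}

@dataclass
class KnowledgeArticle:
    title: str
    body: str
    state: str = "draft"        # AI-generated articles always enter as drafts
    successful_uses: int = 0

    def record_use(self, worked: bool) -> None:
        """Each successful reuse is evidence; visibility expands as knowledge is validated."""
        if not worked:
            return  # whoever surfaced it takes ownership of fixing it
        self.successful_uses += 1
        if self.state == "draft" and self.successful_uses >= 2:
            self._transition("validated")
        elif self.state == "validated" and self.successful_uses >= 5:  # assumed threshold
            self._transition("published")

    def _transition(self, new_state: str) -> None:
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"{self.state} -> {new_state} is not allowed")
        self.state = new_state

    @property
    def visibility(self) -> str:
        return VISIBILITY[self.state]
```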

Scott King

Indeed, there may be a false expectation regarding the trust factor. Chasing perfection indicates a lack of trust. Labeling something as a draft gets it out there, allowing others to gradually build trust. No one is going to switch this on from zero to 100 immediately. There has to be a gate or a human-in-the-loop step to verify its effectiveness; otherwise, you're setting yourself up for failure, because there is no certainty. Getting comfortable with the draft concept and building trust over time is crucial, particularly with regard to hallucinations. Many questions arise about hallucinations, but limiting the corpus of data to your own systems increases comfort.

Julie Mohr

Organizations relying on less-than-reliable data for fine-tuning will experience more hallucinations initially, and those continuing to use outdated waterfall practices will prolong this phase unnecessarily. I advocate an agile knowledge management practice, with trusted data as the input to the LLM, as the accelerator. Without it, if you're in a waterfall environment with poor data quality, your journey with generative AI will be extended. The mistake lies in trying to create perfect knowledge; that doesn't harness the power of generative AI. Insisting that generative AI must adapt to our outdated practices is not progressive. Agility is crucial in the modern enterprise. We need to be responsive, with diversified decision-making. It's not about standardized questions and answers anymore; most of those answers are found in FAQs on websites. It's the new queries coming in that we've never encountered before.

Scott King

All right.

Julie Mohr

Perhaps we introduced a new product with an issue and customers are expressing dissatisfaction. Knowledge needs agility. It needs to match the pace of the market, the industry, and the company. This is not a waterfall practice; it requires agile knowledge management.

Scott King

I envision two sections on the website. The FAQ, as you mentioned, and another section for questions posed today which are not on the list. When the internet emerged, websites were able to catch up. The same happened with mobile adoption. However, I don’t believe AI offers the same catch-up period. Hence, striving for perfect knowledge will only delay the inevitable. Hopefully, this resonates with many.

Leaving aside hallucinations, which I assume you get asked about often, what else are companies or individuals asking about this process?

Julie Mohr

The question often arises: how do we leverage generative AI in a knowledge management practice? This emerging field is in flux, with numerous new startups and organizations. Companies have experimented with ChatGPT, even before the release of versions 3.5 and 4.0. The thing that I think is the most risky is not embracing it. Employees are reading all the hype and they're trying it. It's easy enough to go and explore and start using GenAI in an unprotected environment outside of your firewall. So by beginning and exploring this as quickly as you can with your vendors that are trusted, with your data that's already got the tight controls around it, you're likely to get the best benefit from utilizing generative AI as quickly as possible. Where you do want to be cautious: a lot of companies are interested in building their own LLMs, and that's a whole new level of complexity because, first of all, generative AI skill sets are in great demand.

And most of the people with broad experience are going to vendors; they want to be leaders in the market. They don't want to work for smaller companies, and they know they can command very high salaries externally. So there's a skills gap right now around training LLMs, and small to mid-size organizations are going to struggle to build an LLM internally because of it. If you don't have the right skill sets, you're introducing risk: maybe somebody configures it the wrong way and now your data is being shared in a way you don't want it to be. So building your own is definitely riskier than going with a vendor that has a proven track record of securing and protecting your information.

Scott King

Large IT organizations often reflexively consider building solutions in-house. However, as with payroll or CRM systems, I don’t advise anyone to construct their own LLM. It involves heavy lifting, high expenses, and complexity.

Julie Mohr

That raises a significant point: expense. This industry represents an expensive frontier, with the cost of computing alone heavily impacting many solutions. Depending on your chosen vendor, that cost is either included in the product—raising its price—or requires a separate agreement with the LLM provider for computing expenses. We’re still figuring out these details. The transactional aspect, the CPUs, and GPUs needed to generate responses all add to the significant cost and environmental impact.
This frontier is exciting, but we must acknowledge its current high cost and environmental impact. Many vendors focus on green technology and planet conservation rather than just expense. How this aspect evolves in the industry remains to be seen. Pursuing this path is undoubtedly an expensive endeavor.

Scott King

I’ve seen estimates online where hosting an open-source model can cost up to $35,000 a month. On the other hand, estimates indicate running ChatGPT can cost about a million dollars a day. Just think about the heat generated by the GPUs and the electricity required to cool them. There’s a significant environmental impact, particularly when you consider the water usage, but let’s see what happens.

Julie Mohr

This all traces back to the question of choosing your large language model (LLM) partner. Depending on the nature of your knowledge, one LLM may work better for a particular data corpus and a different LLM may be more suitable for another set of data. Many organizations are still experimenting to find the solution that best suits the type of data they possess. OpenAI has received extensive media coverage because they took the AI community by surprise, and lots of other vendors are striving to catch up. However, you should consider the LLM itself and whether it uniquely meets the needs of your organization. Partnering with the right companies is crucial, making this a tricky place to navigate right now.

Scott King

That’s a crucial point. Chris and I conducted a comparison of three different LLMs against identical data. The Google model surpassed the OpenAI models, particularly in queries about tabular data – its performance stood out.

We dedicated an entire episode and compiled a paper on this subject. Anyone interested should definitely explore the test results, which are openly available, and are quite fascinating. Thank you, Julie, for this engaging discussion about agile knowledge management and the potential influence of GenAI.

We touched on the environmental impact as well. As a resident of Dallas, where it's generally hot, I maintain a keen awareness of heat. I appreciate your time and am grateful for your participation.

Julie Mohr

It has been an incredible experience. Thank you, Scott, for involving me in this.
