The race to adopt AI has flooded enterprises with chatbots, copilots, and isolated agents. What began as innovation is now sprawling into disconnected tools that fail to produce real outcomes. In a recent conversation with John Michelsen and Chris Kraus, we outlined the key findings from the report “Choosing the Right Agentic Platform.” The report explains what separates task automation from platforms that drive enterprise transformation.
Point Solutions Multiply Problems
Most AI investments today focus on surface-level tasks like answering simple questions, running scripts, or summarizing meetings. These capabilities may help individual roles, but they do not improve business velocity or create lasting value. Enterprises end up with hundreds or thousands of narrow tools. Each comes with its own logic, interface, and failure points. The result is more fragmentation and more operational drag.
Orchestration Drives Real Outcomes
Agentic platforms are not built to automate isolated tasks. They are built to orchestrate outcomes. Orchestration agents span departments, systems, and AI models to complete entire business processes from start to finish. Generating a title document is not enough. A complete process also verifies customer identity, delivers the invoice, confirms payment, and notifies marketing for follow-up. Systems that cannot handle multi-step, long-running processes do not qualify as agentic platforms.
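To make the distinction concrete, here is a minimal Python sketch of outcome-level orchestration. The step and field names are assumptions for illustration, not Krista’s API; the point is that one process owns every step from identity verification through the marketing follow-up, rather than stopping once a document exists.

```python
from dataclasses import dataclass, field


@dataclass
class TitleRequest:
    # Hypothetical context record that follows the request end to end.
    customer_email: str
    document_type: str
    events: list = field(default_factory=list)

    def log(self, event: str) -> None:
        self.events.append(event)


def verify_identity(req):   req.log("customer identity verified")    # stand-in for a CRM/KYC lookup
def generate_document(req): req.log(f"{req.document_type} drafted")  # stand-in for the drafting agent
def send_invoice(req):      req.log("invoice delivered")
def confirm_payment(req):   req.log("payment confirmed")
def notify_marketing(req):  req.log("marketing follow-up queued")


def run_outcome(req: TitleRequest) -> TitleRequest:
    # The outcome is the unit of work, not any single step.
    for step in (verify_identity, generate_document, send_invoice,
                 confirm_payment, notify_marketing):
        step(req)
    return req


done = run_outcome(TitleRequest("buyer@example.com", "general warranty deed"))
print(done.events)
```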
Integration Must Be End-to-End
To support orchestration, integration must carry context throughout the process. Customer identity, pricing, and document types need to flow through every step so all systems operate with the same data. This avoids rework and eliminates manual handoffs. Point-to-point connectors or bots limited to single tasks cannot achieve this. A true platform enables integration across entire outcomes.
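A rough way to picture end-to-end integration, using hypothetical connector names: every system action receives the same shared context, so identity, pricing, and document type never have to be re-keyed between steps.

```python
from typing import Protocol


class Connector(Protocol):
    # Hypothetical connector interface: every step works on one shared context.
    def execute(self, context: dict) -> None: ...


class CrmConnector:
    def execute(self, context: dict) -> None:
        context["customer_id"] = "CRM-1042"          # enrich the context, don't re-key it


class BillingConnector:
    def execute(self, context: dict) -> None:
        # Billing sees the same identity and pricing the CRM step produced.
        context["invoice"] = {"customer_id": context["customer_id"],
                              "amount": context["price"]}


def run(connectors: list, context: dict) -> dict:
    for connector in connectors:
        connector.execute(context)                    # context persists across every system
    return context


result = run([CrmConnector(), BillingConnector()],
             {"customer": "Acme Title Buyer", "price": 450.00,
              "document_type": "release of lien"})
print(result["invoice"])
```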
AI Must Improve with Use
Generative AI tools can create content and answer general questions, but they are poor classifiers and do not improve over time. Agentic platforms need traditional machine learning capabilities trained on enterprise data. These include classification, prediction, and anomaly detection. The models must also provide confidence scoring and trigger escalation when uncertain. Without these controls, automation either fails or makes silent mistakes.
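As a sketch of the confidence-and-escalation pattern, using scikit-learn as a stand-in for a platform-trained model: a classifier fit on your own labeled requests returns a probability, and anything below a chosen threshold is routed to a person instead of being acted on automatically.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in for enterprise training data: historical requests with labels.
requests = ["refund for a duplicate charge", "warranty claim for a broken part",
            "how do I reset my password", "charged twice, please refund",
            "part arrived cracked, warranty claim", "cannot log in to my account"]
labels = ["refund", "warranty", "support", "refund", "warranty", "support"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(requests, labels)

CONFIDENCE_THRESHOLD = 0.60   # tuned per process; below this, a person decides


def classify_or_escalate(text: str) -> str:
    probabilities = model.predict_proba([text])[0]
    best = probabilities.max()
    label = model.classes_[probabilities.argmax()]
    if best < CONFIDENCE_THRESHOLD:
        return f"escalate to a human (best guess {label!r} at {best:.0%})"
    return f"auto-route as {label!r} ({best:.0%} confident)"


print(classify_or_escalate("the part broke within a week"))
print(classify_or_escalate("question about my contract renewal terms"))
```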
Enterprise Controls Must Be Built In
Security and compliance features are not optional. Role-based access control, audit logs, and secure workflows should be included from the start. Most vendors provide SDKs and expect customers to build these features. Agentic platforms like Krista deliver them out-of-the-box. This includes user interfaces, chatbot connectors, single sign-on, and full audit trails that track who did what and when. No additional code required.
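A bare-bones illustration of those two controls, with made-up roles and actions: every request is checked against a role map and recorded in an append-only audit trail of who did what and when.

```python
from datetime import datetime, timezone

# Hypothetical role map: which roles may invoke which actions.
PERMISSIONS = {
    "title_clerk": {"draft_document", "view_order"},
    "billing": {"send_invoice", "view_order"},
    "auditor": {"view_audit_log"},
}

AUDIT_LOG = []   # in practice, durable and append-only


def perform(user: str, role: str, action: str) -> bool:
    allowed = action in PERMISSIONS.get(role, set())
    # Every attempt is recorded: who, what, when, and whether it was allowed.
    AUDIT_LOG.append({"who": user, "role": role, "action": action,
                      "allowed": allowed,
                      "when": datetime.now(timezone.utc).isoformat()})
    return allowed


perform("chris", "title_clerk", "draft_document")   # allowed
perform("chris", "title_clerk", "send_invoice")     # denied, but still audited
for entry in AUDIT_LOG:
    print(entry)
```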
AI Flexibility Protects Your Investment
Enterprises should never be locked into a single AI provider. An agentic platform must offer an abstraction layer that allows teams to adopt new models over time. As better AI becomes available, companies need to swap it in without rewriting workflows or retraining staff. Krista allows this flexibility, making it easy to upgrade AI capabilities without impacting the business.
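One way to picture that abstraction layer, as a sketch rather than Krista’s implementation: workflows call a neutral interface, and the concrete provider behind it can be swapped without touching workflow code. The vendor classes below are placeholders, not real SDK calls.

```python
from abc import ABC, abstractmethod


class LanguageModel(ABC):
    # Neutral interface the workflows depend on; no vendor names leak through.
    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class VendorAModel(LanguageModel):
    def complete(self, prompt: str) -> str:
        return f"[vendor A] answer to: {prompt}"    # placeholder for a real API call


class VendorBModel(LanguageModel):
    def complete(self, prompt: str) -> str:
        return f"[vendor B] answer to: {prompt}"    # drop-in replacement


def summarize_order(model: LanguageModel, order_text: str) -> str:
    # The workflow never names a vendor, so swapping models is a configuration change.
    return model.complete(f"Summarize this order: {order_text}")


print(summarize_order(VendorAModel(), "general warranty deed for 12 Main St"))
print(summarize_order(VendorBModel(), "general warranty deed for 12 Main St"))
```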
How to Evaluate Agentic Platforms
The report outlines six essential capabilities every agentic platform must demonstrate:
- End-to-end outcomes instead of one-off task execution
- Conversation-driven workflows that use plain language instead of code
- Trainable AI models based on your data with confidence scoring
- Human-in-the-loop capabilities for escalations and input
- Enterprise-grade security and audit controls
- Compatibility with any AI model, now or in the future
If a vendor cannot demonstrate all six in a live demo, they are not agentic.
Download the Report
The full report, “Choosing the Right Agentic Platform,” explores each of these requirements in detail and includes side-by-side comparisons of leading vendors. Visit krista.ai/choosing-the-right-agentic-platform to read it and make informed, sustainable decisions about automation and AI.
Links and Resources
- Choosing the Right Agentic Platform, Krista Software
- You DO Want AI Trained on Your Data, Krista Software
Speakers
- Scott King, Chief Marketer, Krista Software
- John Michelsen, Chief Product Officer, Krista Software
- Chris Kraus, Krista Software
Transcription
Scott King
Thanks everyone for joining us for Stop Agent Sprawl. I’m Scott King, the Chief Marketer here at Krista Software. I’m joined by John Michelsen, our Chief Product Officer. Hey John, how are you? And we have Chris Kraus, my partner on the Union Podcast. Hey Chris. Thanks everyone for joining us. We’re here to discuss choosing the right agentic platform.
Scott King
This is a paper we put together. If you RSVP’d for this event, I emailed you a link to download it. Hopefully you got it. If not, you can grab it off our website. Go to krista.ai, check the Resources section under White Papers, and you’ll see it listed first. You can read ahead or listen to us discuss the research and read it afterward.
If you have a comment or question, just ask—we’ll try to get to it. If we don’t, we’ll follow up afterward.
To kick us off, John, can you give us an overview? There are a lot of reasons we did this research. We want people to seriously consider how they’re going to replatform their business. What would a small to mid-market enterprise look like 24 months from now if they did the research, made the decision, and started implementing AI strategically?
John Michelsen
Yeah, Scott, this is a big deal. It was the primary motivation behind putting together a well-researched piece, because the transformation we’re seeing in Krista customers is remarkable. When you play it out over 24 months—or even sooner—you realize the world is changing fast.
At this point, it’s not even provocative to say that if you’re not a highly competent AI consumer, you’re in serious trouble as a business.
I can give you examples, but you’ll hear plenty from me this hour. Chris, why don’t you pick a customer, describe their journey, and help us illustrate what this means—not just for that business, but for the industry overall? Let’s make it real for the folks joining us.
Chris Kraus
Yeah, I’ve got one that’s interesting because they’re not a huge company or particularly advanced technologically, but they came to us with a real business problem. It’s a title company, and they said, “We want to expand our business, but we can’t afford to hire 10 more title clerks to do it.”
They wanted to modernize, make their services self-service, and reach customers through digital channels—not just through real estate agents they already know. They didn’t want to just double the business and double the headcount.
They asked, “How can we embrace AI to do more with less?” They wanted an agent to learn how to be a title clerk.
A title clerk is someone who gets an email with something like a general warranty deed, a release of lien, or a builder mechanics lien. They identify the type of document and draft it, and then it gets reviewed and sent back. The company couldn’t afford to hire 15 more people to do that.
They wanted to use AI—what we now call an agent—to handle that work. It was a perfect use case.
They were thinking strategically: “I want to expand and use AI to stand apart from competitors.” Because if competitors try to do the same thing with people, they’ll increase costs without improving margins—and may even go backward.
It was really cool to hear them say, “Can an agent learn to be a title clerk?”
Scott King
So with that, Chris, did they identify labor capacity as the big constraint, and they were trying to get around it? Or were they just exploring what was possible?
Chris Kraus
Exactly—that’s what we trained Krista to do.
Yes, labor capacity was definitely the issue. It takes hours, and it’s very tedious. We’ve talked about this before on the podcast—when the same person repeats the same task, they make mistakes.
They’re cutting and pasting from different documents, doing research, and dealing with a lot of manual work that’s easy to get wrong.
You can’t just ask someone to do 50 of these in a day—the error rate will go up. It’s very detailed work. They’re reading legal documents, and that’s not easy.
There might be 17 dates in a document, and you have to pick the right three. It’s extremely detail-oriented.
John Michelsen
Chris, let’s play this out. Here’s a title company that reduced hours of legal review down to five minutes. And 80 to 90 percent of the time, Krista is handling 95% of the legal work.
Chris Kraus
Definitely. Since they’re expanding to serve general consumers, they’re not going to get perfect data every time. In a perfect world, a realtor would gather all the documents ahead of time and send them over. But now, they want to open their market to everyday consumers—people like you and me.
So they’re dealing with incomplete submissions. It’s not just data in, data out. It’s: data in, identify what’s missing, prompt for that data, then complete the task.
They implemented a smart process. It’s not linear—it’s not A to B. It’s A, then validate, then ask for B, C, and D, and finally create the documents.
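Chris’s “A, then validate, then ask for B, C, and D” pattern looks roughly like this as a toy Python sketch (the field names are hypothetical): the process checks what is missing, prompts for it, and only then creates the documents.

```python
REQUIRED_FIELDS = ["buyer_name", "property_address", "closing_date", "lien_holder"]


def missing_fields(submission: dict) -> list:
    return [f for f in REQUIRED_FIELDS if not submission.get(f)]


def process_submission(submission: dict, ask) -> dict:
    # Not a straight A-to-B flow: validate, request what is missing, then create.
    while gaps := missing_fields(submission):
        for field in gaps:
            submission[field] = ask(field)    # prompt the consumer for the gap
    submission["documents"] = f"drafted for {submission['buyer_name']}"
    return submission


# Simulated consumer answers; a real deployment would prompt over email or chat.
answers = {"closing_date": "2025-08-01", "lien_holder": "First Bank"}
result = process_submission(
    {"buyer_name": "Pat Doe", "property_address": "12 Main St"},
    ask=lambda field: answers[field],
)
print(result)
```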
John Michelsen
Exactly. That’s a real-world example. No one has perfect data or a clean, consistent process that works every time.
The reason I asked for a case study is because this company is just getting started. There are dozens of types of legal title work they could automate, and they’re already delivering it at 80% lower cost.
Now they have excess capacity and can go to market against title companies that aren’t doing this yet. Even if they offer services at 50% of current industry rates, they still make more money.
And if this were an isolated case, it wouldn’t be a compelling story for this podcast. But it’s not.
Every successful Krista implementation is doing far more than people expect.
There’s this weird polarity in AI projects—they’re either overhyped science experiments, or so basic you’d get better answers from a Magic 8 Ball.
Chris Kraus
Yep. “Ask again later.”
John Michelsen
Exactly. At my age, I remember the kinds of questions I asked it as a teenager—you can probably imagine. But you had about as much luck with that as you do with some of these over-the-top science projects.
On the other side, some efforts are so tactical they only save someone 15 minutes of work here and there.
But companies that are thinking differently—like the title company you just described—are going to dominate their markets. They’re not going to be on the losing end of this. If anything, they’ll come out as clear winners.
As the founder of a company that’s committed to our customers’ success, I want to see that happen. But this brings up a bigger point: we need to help educate the market.
To be successful, you need to develop a core competency in adopting change—especially technology change, and more specifically, AI.
That’s how you move from the 85% of AI projects that fail to the 15% that succeed and lead. If you’re not doing that, you’re betting the rest of the market is asleep.
I’m not trying to be provocative. At this point, I don’t even think it is. We’ve seen too many real examples. Chris just gave one of dozens we could share.
It no longer makes sense for someone to say, “This is a fad” or “It won’t affect us.” The companies doing that—running pilots, waiting it out—will likely be working for other companies a year or two from now.
John Michelsen
I’m just being candid.
Scott King
Right. If you don’t know that you need that core competency and you’re not continuously evaluating it, chances are someone next to you already is. They’re on their third or fourth try.
They’ve already learned from their mistakes—maybe it was a science project, maybe it was a minor automation—but they’ve started. And meanwhile, others are still saying, “Yeah, I guess we ought to try something.”
John Michelsen
The reason we introduced the topic this way—and we should probably set a better agenda—is that we feel a responsibility.
We’re a mature product in a very immature space. And we’ve helped lead a lot of these early successful projects. So we want to help people understand how to platform themselves to succeed.
That’s what led us to write the paper and put all this work together.
We’re telling the market there’s a critical technology decision they need to make. And most people know that—they understand they need to adopt AI strategically.
But how they go about it will determine whether they succeed or not.
The challenge is, many of them don’t know enough about AI. They might’ve played with GPT in a browser, and even their tech team is only slightly more advanced than that.
So we’re stepping into this moment—where there’s both a crisis and an opportunity—and the audience isn’t fully prepared to evaluate the decision in front of them.
Scott introduced the main theme of today’s session. Now, you’ve heard the 10-minute preamble on why we believe this is such an important and urgent topic.
Scott King
What we did started as a two- or three-page response to a customer. They had sent us questions like, “How do I do this? How do I do that?” I looked at it and thought—it could be more comprehensive.
So we expanded on the questions and wrote a document that outlined everything companies should be considering. Then we thought, let’s compare how other vendors approach these issues.
We ended up identifying over 40 different requirements. You may not need all of them for every workflow or automation, but you should at least consider how you’d handle them in the future. A decision today can lock you into limitations tomorrow. You need flexibility.
We looked at high-level capabilities, machine learning, and generative AI—which we say is only about 5% of the solution. That’s just summarizing the data you provide.
Then we asked: how do you build the kind of velocity John mentioned earlier? How easy is it to make changes? Because your business is constantly evolving. It’s different today than it was yesterday—and will be different tomorrow.
Hard-coding an agent into a rigid workflow is risky.
We also reviewed integration—how your systems interact—and how people interact with those systems. What we found with many so-called agentic platforms is that they have no concept of how to involve people.
If you need help, there’s no way to trigger a human response. Remember that Regis Philbin show? You’d have to “phone a friend.”
Chris Kraus
Phone a friend.
Scott King
Exactly. There’s no “phone a friend” functionality. Your data and AI won’t cover every scenario. Someone will need to supplement answers with tribal knowledge—“Oh, this is how that really works”—and that input should be added to your model.
We looked at integration and usability, from both the user’s and the developer’s perspectives. A lot of platforms claim usability, but when I dug into the research, it was all developer usability. Everything was buried in SDK documentation.
Then there’s enterprise readiness. We hear stories all the time: someone gets an answer from data they shouldn’t have had access to. You run into privacy and security questions constantly.
One customer told me, “I don’t want to connect OpenAI to my cloud system because I don’t want my data in the cloud.” I told them—it’s already in the cloud.
So we linked every one of these requirements to the vendor websites. You don’t have to take our word for it—go read for yourself. Then decide what really makes the difference.
Some companies with dedicated IT development teams might choose one platform. Others need to consider the resources required more carefully. Of course, we think Krista is the best way to do it.
John Michelsen
We should go through each of those sections one by one. But a couple of high-level thoughts—Scott, you already brought one up.
It’s strange. We’ll tell a customer, “These are critical capabilities you need,” and their IT team will say, “We can already do that.” Technically, yes—they may not have adopted new tech in five years, and it would still be true.
But that’s not the point.
This isn’t a binary question of whether something is feasible or not. With unlimited budget and a full team of developers and data scientists, you can build anything Krista does.
Let’s be clear about that.
The issue is velocity—the speed at which you embrace change. That’s what has to be your core competency. The real question isn’t whether your team can do it, but whether they can do it quickly, consistently, and without draining resources.
As business leaders, we have to acknowledge that inertia is real. I’ve built several teams, and you always have to fight the mindset of “this is how we’ve always done it.”
What got you here won’t get you there.
John Michelsen
We’re at a point where many dev teams, IT groups, and even Krista competitors refuse to accept that it’s not enough to say, “Yes, we can do it.”
You have to do it with a speed, ease, and consistency that you’ve never achieved before—and that no product has enabled until now.
That’s the new bar.
Scott mentioned human-in-the-loop and user involvement. Most platforms out there—if you’re squinting at the vendor list—don’t even have a user database.
Scott King
Is it small? I can make it bigger.
John Michelsen
They don’t have the ability to audit or automatically log transactions. There’s no transparency in how the AI makes decisions. No explanations, no traceability. I could go on—there are so many things left for the customer to build from scratch, and you don’t have that kind of time.
Especially if you’re in the same market as someone already doing what that title company is doing. You shouldn’t be building plumbing or figuring out how to make basic AI work while someone else is leaping ahead.
That’s what makes this message so hard to communicate to customers.
Yes, other platforms can do some of these things. But if you’re in a race and you jump into a car that tops out at 80 mph while others are doing 120, you won’t win.
Every one of us is now choosing a platform—a race car—that will carry us through the next 24 months while everything changes. If you randomly choose the slower one, or worse, do it unconsciously, you’re putting your future at risk.
What’s frustrating is when customers say, “Well, we already have XYZ platform, so I guess we’ll just use that.”
Is that because it’s the best choice—or is it just inertia?
Or maybe someone on the team downloaded something, started playing with it, and now everyone’s defaulting to it. That’s not a strategy. I’d hate for luck to decide the fate of my company’s AI journey.
John Michelsen
This decision matters—regardless of what business you’re in. Even in our own business as a software company, we’re being disrupted. Our methods are changing radically. This kind of disruption doesn’t usually affect tech companies as quickly, but this one does.
So Scott, why don’t you walk us through each of these categories. Then Chris and I can provide some color on how we scored them and what we meant by each.
We want folks to form their own opinions. No serious person is going to just take our word for it.
And yes—some of the scoring may look surprising. Like machine learning capabilities being green for us but all over the place for others. We’ll touch on that at a high level. But if you’re evaluating platforms, get into the details. The differences are real and meaningful.
As I said earlier—I’ve been doing this a long time. Our salespeople love to open meetings with, “John delivered his first NLP-based AI software in 1992.” Thanks for the reminder. Yes, I’ve been at this a while.
We now have a relatively mature product in a very immature market. Krista was founded in 2019 with a clear vision for what the world is trying to do today: orchestrate human, system, and AI capabilities to drive outcomes—fast.
That’s what we built the platform for.
Scott, go ahead and introduce each area, and then we’ll provide some deeper explanation. That way people can see how we went beyond checkboxes to real, meaningful comparisons.
Scott King
There are a lot of claims out there. As a product marketer, I read many of these websites and sometimes I just laugh—because I know exactly how we used to write like that.
Let’s start by talking about knowledge agents. I think it’s the easiest example for people to understand. A knowledge agent connects to your databases, SharePoint, knowledge base, or documents to answer questions.
But I want to go back, John, and cover how you keep that updated. That gets into conversation agents. So I’m going to flip the order—let’s first talk about agent capabilities, how they work, and then we’ll cover how you keep them up to date with real-time data, like this session we’re recording right now, which will be integrated into our knowledge base as soon as we hit “end.”
John Michelsen
A knowledge agent is somewhat analogous—but not identical—to what the market calls RAG, Retrieval-Augmented Generation. Proof, by the way, that I’m not the only one who struggles with naming things.
As Scott said, a knowledge agent connects your systems and retrieves relevant information. It sounds like a simple use case—but the moment you try to operationalize it, it becomes complex.
No one has a ready-to-go dataset for this. I’ve never met anyone who does. So you need a knowledge agent that helps curate content as it’s deployed, and that connects systems and people to continuously curate that content over time.
It needs to understand dates, handle security properly, and provide contextual answers. Even a simple question like, “Do we get Monday off?” depends on where you work, what type of employee you are, if you’re hourly or salaried, whether you’re under a union contract, and even what country you’re in.
What seems like basic reading comprehension turns into a complex challenge.
Delivering all of that is what creates velocity. You might think you can piece this together with a stack of tools and get a basic step or two working. But as soon as you try to go beyond that…
John Michelsen
You’re looking at a big software build. That’s where most people struggle.
Chris Kraus
One major issue is role-based security. Not every document should be visible to everyone. You might ingest data from SharePoint or your website, but access needs to be context-specific.
HR managers can see documents employees can’t. Sales managers can view multiple deals that individual reps cannot.
RAG isn’t just about pulling in lots of information—it’s about securing it so the right people see the right content, and no one sees what they shouldn’t.
And then there’s the need to continuously feed the system. Every sales meeting, every new SharePoint doc—it all needs to be ingested regularly so the data doesn’t go stale.
Some say, “Well, we’ll just fine-tune the LLM.” But can you really do that nightly? And even if you could—how do you secure the data? How do you prevent hallucinations?
There’s a lot happening in the background. That’s why we built the platform—to solve these problems so you don’t have to. It looks simple on the surface, but underneath, it’s complex.
How do you ensure the LLM understands this is your data—not something it trained on Reddit at 2 a.m.? How do you secure the questions and answers?
John Michelsen
Exactly.
Scott King
Remember the example we used to give, Chris? A seemingly simple one—travel per diem. “How much can I spend per day?” Well, it depends on who you are. At our previous company, some of us had a $50 limit, while others—like John—had a much higher allowance.
Chris Kraus
Right. Your per diem was based on your employee band, hotel rate eligibility, and whether you could fly business class versus economy—all those variables.
Scott King
Exactly. So how do you limit the answer appropriately?
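One rough way to answer Scott’s question in code (the documents, roles, and rates here are made up): filter the knowledge store by who is asking before any answer is generated, so an hourly employee and an HR manager draw on different sets of sources.

```python
# Hypothetical knowledge store: each entry carries the roles allowed to see it.
KNOWLEDGE = [
    {"text": "Per diem for band 1 employees is $50 per day.", "roles": {"employee", "hr"}},
    {"text": "Per diem for executives is $200 per day.", "roles": {"executive", "hr"}},
    {"text": "Business-class flights require VP approval.", "roles": {"executive", "hr"}},
]


def retrieve(question: str, asker_roles: set) -> list:
    # Access is filtered before any generation step ever sees the content.
    visible = [entry["text"] for entry in KNOWLEDGE if entry["roles"] & asker_roles]
    keywords = set(question.lower().replace("?", "").split())
    return [text for text in visible if keywords & set(text.lower().split())] or visible


print(retrieve("What is my per diem?", {"employee"}))   # only the band 1 rate
print(retrieve("What is my per diem?", {"hr"}))         # also sees the executive rate
```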
John Michelsen
If you want to start a mutiny in your company, just give everyone access to the full table and let them see what everyone else gets.
Let’s move on to conversation agents. As Scott introduced, the idea is to capture all the conversations happening across your organization. This very meeting is an example.
All the Zoom and Teams meetings, email threads, chats—there’s a huge amount of tribal knowledge being created and shared. I spoke with a customer earlier today who’s already sending their emails to three different SaaS systems just to try to extract some value. That kind of siloed approach is going to drive people crazy.
We’re doing work for one part of their business and were just introduced to another. They said, “We do that by doing this”—broadcasting emails across three systems. Most customers don’t want us to even touch email unless it’s controlled. But these guys are just pushing it everywhere, hoping to get something useful.
I feel for teams trying to figure this out without a strategic approach. That’s where conversation agents come in—they help you understand your organization’s actual state.
No single data source holds 100% of the truth. It might be outdated, incomplete, or missing other perspectives. When you ask, “What were our last interactions with customer Acme?”—the answer could be in an email, a ticketing system, a recent meeting, or a proposal from a month ago.
No one system has it all. But Krista spans those systems and brings everything together.
Be careful not to isolate information. If you just collect meeting transcripts in a silo, that knowledge is stuck. It can’t be reused across your organization. One of the most important things Krista does, in my view, is linking orchestration to outcomes.
John Michelsen
Krista’s enterprise-level understanding isn’t just about answering questions—it’s about taking action.
Sure, sometimes people ask questions out of curiosity. Maybe 30% of the time. But 70% of the time, they’re asking to achieve a specific outcome.
I’ve never seen someone seriously digging for an answer without trying to do something with it.
When Krista’s orchestration is layered on top of that context, you can act at machine speed. That’s the real advantage.
Without orchestration, your system just resolves curiosity and then sends people off to figure things out manually. But what you want is for the machine to handle the process.
For example, if someone asks, “Is this customer at risk?”—Krista can automatically add them to a care list, trigger a follow-up email, or escalate an internal workflow.
You’re not looking for people to remember what they should do. You want the machine to do it for them. And connecting enterprise-level knowledge to that orchestration is what makes it possible.
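A toy sketch of that answer-plus-action idea, with illustrative function names rather than Krista’s: the same risk assessment that answers the question also triggers the follow-up steps.

```python
def churn_risk(account: dict) -> float:
    # Stand-in for a trained predictor: late invoices and low usage raise the risk.
    return min(1.0, 0.4 * account["late_invoices"] + 0.6 * (1 - account["usage"]))


def add_to_care_list(name): print(f"{name}: added to customer-care list")
def send_followup_email(name): print(f"{name}: follow-up email queued")
def escalate_to_account_team(name): print(f"{name}: escalation opened")


def answer_and_act(account: dict) -> str:
    risk = churn_risk(account)
    if risk >= 0.7:
        # Answering the question and acting on it are one motion, not two.
        add_to_care_list(account["name"])
        send_followup_email(account["name"])
        escalate_to_account_team(account["name"])
    return f"{account['name']} churn risk: {risk:.0%}"


print(answer_and_act({"name": "Acme", "late_invoices": 2, "usage": 0.3}))
```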
So let’s talk about machine learning. Scott, there’s a lot of red in the comparison chart.
How do we justify that?
Scott King
We were looking at machine learning capabilities in the chart—things like classifiers, predictors, anomaly detection, custom model training. Essentially, it’s about giving you access to a data scientist without needing to hire one.
Confidence scoring is a big piece, and so are transparency reports—which you mentioned earlier. You need to understand why the AI made a certain decision.
This comes up a lot online. There’s a growing interest in the idea of an agentic control tower—or IT control tower. When an inquiry comes in, or a business process starts, you need to identify what it is. Is this a support ticket? A warranty claim? A refund request?
You need a classifier to determine that. But most other platforms don’t even have a concept for this kind of orchestration. That’s because most of their use cases are narrow and role-specific—“this type of user gets this type of agent to do this one thing.” And they call that “agentic.”
You’re going to end up with thousands of these. If you’re a 500-person company, you could easily have 2,000 of them—each built on different platforms or SaaS systems. That’s the sprawl we’re talking about. If you need all these capabilities, you’re creating technical debt. And when it comes to machine learning, most of these platforms don’t even mention it.
John Michelsen
And there’s a reason for that. Most platforms on this list—and even others we didn’t include due to space—are just thin veneers over generative AI.
Don’t get me wrong—GenAI is groundbreaking. It has driven incredible outcomes for Krista and others. But it’s not a good classifier. That’s not its strength. It’s a generative model—not built for classification.
Yes, you can ask it to classify things, and it might do an okay job sometimes. But the fundamental issue is: it won’t get better. If it’s 60% accurate today, it’ll still be 60% tomorrow. It doesn’t train on your feedback. It doesn’t learn and improve.
Classification, on the other hand, is a different kind of AI. It’s how governments flag fraudulent international trade transactions. My prior startup used classification to detect mobile threats and malicious apps. It’s a proven, mature AI capability.
If we want machines to take over decisions humans make today, we need to move at machine speed. That’s what Krista enables.
Krista learns from your people. If it sees a decision that’s similar enough to what it’s been trained on, it can make that decision on their behalf and keep the process moving. That’s how you build velocity—not just speed over your current manual process, but faster than your competitors.
Whether it’s predicting numbers, identifying anomalies, or classifying requests—these capabilities are essential. And without transparency or confidence scoring, you don’t know when to let the machine decide or when to bring in a human.
From day one, we designed Krista to be a great data scientist. And we designed it to say, “I don’t have enough information to be confident here.” That’s exactly how people make decisions.
We modeled this after human decision-making. You gather some information, lean a certain way, and then decide whether you’re confident enough to act—or whether you need input from someone else.
That’s when I might call on Chris or Scott: “I need help deciding this or I need more data before I act.”
I can’t emphasize this enough—the core machine learning capabilities woven into outcome orchestration are the most important part of this. Go back to Chris’s title company example. The first step is: is this a warranty deed? That’s classification.
Same in customer support. Is this a return request? A billing issue? A product inquiry? Krista can fully automate many of these, but only if the classification is accurate.
If you’re relying on GenAI for that, and it’s only right 70% of the time, you’ll always need a human to triage. That defeats the purpose.
The difference is this: GPT is pre-trained. It knows what it knows. You need a model that learns from your data.
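To sketch the “learns from your data” point, again using scikit-learn as a stand-in: each human correction becomes a new labeled example and the classifier is refit on the growing set, which a frozen, pre-trained model cannot do.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Seed examples labeled by a title clerk; every later correction is appended here.
texts = ["general warranty deed for a sale", "release of lien after payoff",
         "builder mechanics lien filed", "warranty deed transferring title"]
labels = ["warranty_deed", "lien_release", "mechanics_lien", "warranty_deed"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)


def record_human_correction(text: str, correct_label: str) -> None:
    # Human-in-the-loop feedback: keep the correction and refit on the larger set.
    texts.append(text)
    labels.append(correct_label)
    model.fit(texts, labels)


document = "lien release recorded after final payment"
print("before feedback:", model.predict([document])[0])
record_human_correction(document, "lien_release")
print("after feedback: ", model.predict([document])[0])
```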
That’s why we did a podcast a couple weeks ago titled, Yes, We Are Going to Train AI on Your Data. Because we are—and you want us to. We’ve had customers send us security review forms from their IT teams asking, “Can you confirm you won’t train AI on our data?”
And I think—how do I even answer that? Of course we’re going to train on your data. That’s why you’d choose Krista. But obviously, not for third parties. Never for external use. Only for you.
Scott King
Like an RFI or security questionnaire.
John Michelsen
All right, we can move on from machine learning—but if you take anything away from this, let it be this: these capabilities are a real unlock. Everyone in this session should take that to heart.
Scott King
Exactly. You’re routing work to agents—either a machine does it, or a person does.
So Chris, let’s talk about integration and orchestration. You’re probably the best person to walk us through this, right? You’ve been doing this since WebMethods, LISA, Krista, and Worksoft too. Walk us through the integration and orchestration capabilities. I also want to touch on the concept of “wait,” because not all processes happen instantly. When we get there, I’ll jump in with an example of how one of these vendors would handle it.
Chris Kraus
What’s interesting is how people approach integration. Old-school folks like me think about connecting data across 15 systems so you can do something meaningful.
But more recently, the mindset has shifted to RPA—robotic process automation—where “integration” just means replaying a transaction on a screen in one narrow area. That’s not real integration; that’s task automation.
With the title company, they deployed in layers. First, we identified and created documents. But that alone wasn’t enough. Just because they didn’t hire more title clerks doesn’t mean they had people for billing or receivables.
So the next step was: after delivering the title document to someone like you, Scott, did we also bill you the right amount? Did you get the invoice? Did you pay it? Can we follow up with more services?
It started spanning multiple departments.
Orchestration, in their case, wasn’t just about getting the document right—it was about completing the financial loop. They couldn’t afford to burn hours on manual updates: typing into spreadsheets, feeding data into billing systems, verifying payments.
Instead, orchestration meant connecting email, CRM (to verify the customer), AI (to generate documents), and the accounting system (to finalize and send the bill).
It had to all work together.
That’s why orchestration can’t be just a series of simple tasks. Each of those—customer verification, document generation, invoicing, marketing follow-up—might be separate tasks, but true orchestration connects them end-to-end.
You need integration at every step, and data needs to persist throughout. Every system needs to know who the customer is, what their contact info is, what type of document they requested. Otherwise, how do you know who to bill and for what?
That’s the shift. It’s not about making one widget faster—it’s about combining six pieces and building something meaningful. That’s what end-to-end orchestration means.
John Michelsen
All right. We’ve all experienced being on the road with a traffic light every quarter or half mile. It doesn’t matter how fast you drive between the lights—you’re still going to end up waiting at the red light. You can either burn energy trying to beat the light or save fuel and drive reasonably. But either way, you stop.
That’s the point: optimizing a single step in a system often accomplishes nothing except wasted effort.
We’ve known this for a long time. Concurrent engineering principles have always told us this. But most automation platforms still claim to be outcome-oriented, yet devolve into just delivering isolated tasks.
They usually break down in two areas: human-in-the-loop and system integration. So let’s dig a bit deeper before we move on.
We need a way to define outcomes without code. Otherwise, we’re stuck writing requirements and handing them to a developer to build. Connectivity to systems is a technical skill—but business processes change far more often than your systems do.
You don’t change your CRM every six months. But the way you run your business and how you want to use that CRM—that changes constantly.
Connecting Krista to a system is usually a one-time setup. But once that’s in place, the integration is driven through natural language. We literally use conversation as our programming language.
You inform people. You ask people for things. You ask systems. You ask Krista—which is how you invoke AI. That’s it. There’s nothing more complicated than that.
Now, if your process is simple, Krista is extremely easy. But if your process is complex, Krista still has to represent that complexity. We’ve removed as much of the technical burden as possible, but Krista is still capable of handling long-running, system-initiated, and complex processes better than anyone else.
And we’re far ahead in our ability to represent those kinds of workflows.
Every agent runs on the same Krista platform. Each one comes with a lightweight orchestration capability, and they’re all accessible from a single screen. You’re not switching products, tools, or programming languages.
That level of integration and consistency is what gives us the velocity that no other tech stack can match.
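Since long-running processes and “wait” come up here, a toy sketch of the pattern (not Krista’s engine): the process records what it is waiting for, so an external event hours or days later can resume it rather than the workflow timing out.

```python
import json

# Toy durable store; a real engine would persist state outside the process.
STORE = {}


def start_process(process_id: str, customer: str) -> None:
    STORE[process_id] = {"customer": customer, "step": "awaiting_payment"}
    print(f"{process_id}: invoice sent, now waiting on an external event")


def handle_event(process_id: str, event: str) -> None:
    state = STORE[process_id]                 # reload persisted state to resume
    if state["step"] == "awaiting_payment" and event == "payment_received":
        state["step"] = "completed"
        print(f"{process_id}: resumed later, outcome completed for {state['customer']}")
    STORE[process_id] = state


start_process("order-7", "Acme Title Buyer")
print(json.dumps(STORE))                      # state survives between events
handle_event("order-7", "payment_received")
```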
Scott King
Right—and on the connectors, John, you mentioned this, but I’d add a warning: just because a platform claims to have a connector doesn’t mean it works the same way.
John Michelsen
It’s true. There’s a company on the market—I won’t name them because this isn’t about calling anyone out—but they claim to have 2,000 connectors. What they really mean is they have 2,000 point-to-point integrations, each built to do one specific thing.
If what you need isn’t exactly what those connectors were built for, you’re out of luck. You can’t expand them.
That’s why you need to separate business logic from system connectivity. Otherwise, you end up stuck with a brittle architecture.
Scott King
Exactly—especially when Chris mentioned state earlier. You need to retain state throughout the entire process. If you’re stitching together eight different connectors in one flow, you’re just hoping it works. And sure, maybe there is a connector for that—but it doesn’t mean it’s usable.
We’ve got about seven minutes left, John, and we still need to cover enterprise readiness.
One of the most interesting things I came across was that one of these platforms had a nine-minute cap on workflow duration. Anything longer, and it just stopped. There was no way to reinitiate it. That caught my attention.
Also, there’s the logging, traceability, role-based access control—which we’ve touched on.
What stands out to you most about enterprise readiness? And what would you caution people to seriously reconsider?
John Michelsen
To summarize quickly and leave room for questions—many vendors say they offer features like role-based access control. What they mean is: you can build it.
That’s a big difference.
Nearly every product we researched has a chatbot SDK. But there’s a world of difference between a chatbot SDK and an actual, functional chatbot.
I was just on a call a couple of hours ago with a customer who wants to transition off-hour support to Krista. During business hours, they want Krista to triage the issue—if Krista is trained on the topic, she answers it. If not, the request is routed to a live agent.
But during off-hours, it needs to behave differently—there’s no one to escalate to.
We hadn’t deployed their chatbot yet. But while we were on the Teams call, Vivek—our customer success manager—created the bot, added it to the workspace, showed them where it was, gave them the URL, and the embed script for their browser. Done. That’s it.
We already have a full client experience. In addition to our omni-channel connectors, there’s no need to build login screens or onboarding workflows. Those already exist.
You’re going to find that with most of these other platforms, you’ll be building those things from scratch. Even vendors with full stacks—where you’d expect this functionality—may require you to use four different products just to build one complete experience.
John Michelsen
We’ve seen that. Customers say, “I thought I’d get all of this from the vendor,” but once they tried to assemble it, they realized they couldn’t make it work.
So, to summarize:
You need a platform that can bend with you, move fast, and help you succeed or learn. It’s not succeed or fail. The faster you learn, the faster you succeed.
Start small, but have a big plan. We think we’re a great partner to help you do that.
Scott King
Chris, you’ve been monitoring the comments—any questions?
Chris Kraus
Yes. I haven’t seen specific questions, but we’ve had a lot of great partner engagement. We’ve got about 29 people live right now.
Pablo from sales has commented on several attendees. A lot of people are saying they appreciated the use case we shared, and they agree that to go big with AI, you need several foundational elements in place.
I didn’t see any direct questions for you, Scott—except Andy said, “Great, we should redo this.” Andy Winters said you nailed it—creating technical debt across multiple platforms with slow operations. We probably shouldn’t tell people someone agreed with you, but there you go.
Stanley also agreed with our take on the proliferation and disconnection of copilots and one-off bots. He said it really highlights the fragmentation of the AI OS market. He also said he’s curious how our report addresses interoperability between different AI platforms and how Krista ensures seamless integration that enhances productivity. So he’s asking about outcomes.
John Michelsen
We can hit that quickly—it’s mentioned in the paper. Krista isn’t the last system or AI tool you’ll ever buy. It’s the layer that ties everything together.
Scott King
Give us 60 seconds on that and then we’ll wrap up.
John Michelsen
If you find an AI capability that works well, you just need to make sure Krista can connect to it. That way, everyone in your organization can access and use it. If you want to swap it out later, you can—and your outcomes won’t break. In fact, they might even improve.
Scott King
Thanks, everyone, for sticking with us. If you want to grab the report, go to krista.ai, click on Resources, then White Papers. It’s the first one listed.
No registration required—we’re not going to chase you. We just want you to read it, learn something, and make better decisions about automation and AI.
Thanks again, John. Thanks, Chris. Loved the discussion. See you all later.
John Michelsen
Bye everyone.