AI Agents—Security Asset or Hidden Risk?

February 5, 2025

As AI agents become more integrated into enterprise operations, the conversation around data security and privacy is shifting. Enterprises are moving beyond basic concerns about large language models (LLMs) and data leakage—now, they’re asking deeper questions about AI agents running autonomously within their systems.

How secure are these agents?

Do they introduce new risks, or can they improve security posture?

John Michelsen explains how introducing AI agents and an agentic platform can improve business outputs and potentially strengthen your security posture.

The AI Security Dilemma 

Enterprises understand that pasting sensitive data into an LLM isn’t secure, but now they’re building AI agents that interact with their internal systems. The fear? Unfettered system access, vague directives, and unpredictable AI behavior. Without proper safeguards, AI agents could introduce risks that security teams have spent years trying to mitigate. 

Three Key Security Considerations for AI Agents 

  1. External Cyber Threats: While breaches caused by external attackers get the most attention, they are not the most common. Enterprises must ensure that any AI-driven system follows established cloud and SaaS security best practices, including SOC 2 compliance and penetration testing. 
  2. Regulatory and Compliance Risks: Many security measures are non-negotiable due to regulations like PCI, GDPR, and internal data residency policies. AI platforms must prove compliance, not just claim it. Without strong audit logs and access controls, AI systems can introduce compliance gaps. 
  3. Internal Data Leakage: The most significant risk often comes from within. Employees—whether accidentally or maliciously—can expose sensitive data. AI platforms that orchestrate workflows properly can actually reduce security risks by eliminating unnecessary system access for employees, ensuring data only moves where it is needed. 

Are AI Agents Just Another Data Security Threat? 

One of the biggest concerns among security professionals is whether AI agents will act like rogue system administrators with unlimited access to enterprise data. The real question they often want to ask but don’t is: Am I a complete idiot for connecting an AI agent to all my systems? Many fear these platforms will extract vast amounts of data without oversight, leading to unintended exposure. 

The reality? AI agents don’t need unlimited access. Mature AI platforms, like Krista, follow zero-trust principles, ensuring that AI agents only access the specific data they need to perform designated tasks. The right AI platform improves security by eliminating the need for employees to have direct system access, reducing human errors and insider threats. 
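
To make zero-trust, task-scoped access concrete, here is a minimal sketch in Python. It is an illustration under assumptions, not Krista’s actual API: the `AgentAccessPolicy` class, the scope names, and the `fetch_record` stub are hypothetical; the point is the deny-by-default pattern, where an agent can only touch the specific records a given task requires.

```python
# Minimal sketch of task-scoped, least-privilege access for an AI agent.
# The policy class, scope names, and fetch_record stub are illustrative
# assumptions, not any vendor's actual API.

class AgentAccessPolicy:
    """Maps each approved task to the only resources the agent may touch."""

    def __init__(self, task_scopes: dict[str, set[str]]):
        self.task_scopes = task_scopes

    def check(self, task: str, resource: str) -> None:
        allowed = self.task_scopes.get(task, set())
        if resource not in allowed:
            # Deny by default: anything not explicitly granted is refused.
            raise PermissionError(f"Task '{task}' may not access '{resource}'")


def fetch_record(resource: str) -> str:
    # Stand-in for a real, credentialed connector call.
    return f"<value of {resource}>"


policy = AgentAccessPolicy({
    "update_shipping_address": {"crm.contact.address"},
    "summarize_open_tickets": {"ticketing.ticket.status", "ticketing.ticket.title"},
})


def agent_read(task: str, resource: str) -> str:
    policy.check(task, resource)  # zero-trust: every call is verified, nothing is implicit
    return fetch_record(resource)


print(agent_read("update_shipping_address", "crm.contact.address"))  # allowed
# agent_read("update_shipping_address", "crm.contact.notes")         # would raise PermissionError
```

In a real deployment, a policy like this would live outside the agent’s reach and be enforced by the platform itself, not by a prompt the model is merely asked to honor.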

The Business Model Red Flag 

Another critical question enterprises should ask is: How is this AI platform making money? If an AI vendor offers extensive R&D, model training, and system integration at extremely low prices or even free, the real customer might not be the enterprise—it could be someone else buying the data. As the saying goes: If you’re not paying for it, you’re the product. 

Security-conscious organizations must scrutinize their vendors’ business models. Platforms with transparent revenue models that charge customers for the service—not their data—are more likely to align with enterprise security needs. 

How AI Agents Can Improve Security 

Rather than viewing AI agents and agentic platforms as a security liability, enterprises should consider how they can actually enhance security: 

  • Reduce Human Integration Risk: Today, many employees act as manual integrators between systems, exposing unnecessary data. AI-driven automation removes this risk.
  • Enforce Fine-Grained Access Controls: Instead of giving employees broad system access, AI agents interact with systems on their behalf, enforcing least-privilege principles.
  • Improve Auditability: AI-driven workflows generate detailed logs, making security audits easier and more reliable (see the sketch after this list). 
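
As a rough illustration of the last two points, the sketch below wraps an agent action so that every call is made on a named requester’s behalf and leaves a structured audit entry. The decorator, the log fields, and the `update_contact` example are assumptions for illustration only, not a specific product’s logging format.

```python
# Minimal sketch: wrap every agent action so it leaves a structured audit trail.
# The decorator, log fields, and example action are illustrative assumptions.
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")


def audited(action_name: str):
    """Decorator that records each agent action as a structured audit event."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(requester: str, *args, **kwargs):
            result = fn(requester, *args, **kwargs)
            audit_log.info(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "action": action_name,
                "requested_by": requester,
                "arguments": {"args": args, "kwargs": kwargs},
                "status": "success",
            }))
            return result
        return inner
    return wrap


@audited("crm.update_contact")
def update_contact(requester: str, contact_id: str, new_email: str) -> str:
    # Stand-in for a real CRM call made on the requester's behalf.
    return f"contact {contact_id} updated to {new_email}"


update_contact("scott.king", "C-1042", "scott@example.com")
```

Because the employee never touches the CRM directly, the audit record shows exactly which action was taken, for whom, and with what arguments, which is the kind of defensible trail compliance reviews ask for.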

Final Thoughts on AI Agent Security 

The conversation around AI security will continue to evolve. AI agents, when deployed thoughtfully, can improve enterprise security postures rather than weaken them. The key is to choose platforms that follow zero-trust security principles, maintain clear business models, and provide robust compliance mechanisms. By doing so, enterprises can embrace AI innovation without compromising security.

Speakers

Scott King

Chief Marketer @ Krista

John Michelsen

Chief Geek @ Krista

Transcription

Scott King
All right, John, I want to talk with you today about data security and privacy. A lot of the conversations we’re having, both myself and the field organization, involve feedback from prospects and customers who are exploring AI agents and agentic platforms.

John Michelsen
Mm-hmm.

Scott King
They mostly understand LLM security now. They recognize data leakage risks and know they can’t paste text into an LLM and assume it’s 100% secure. But now they’re building agents that operate within their business, and they’re questioning whether it’s still safe. There’s a lot of fear and doubt about what these agents will actually do. Are they going to operate within expected boundaries, or will they behave unpredictably?

John Michelsen
Yeah. I’ve been pulled into these conversations by our field organization, both pre- and post-sale, because it’s a major concern. It’s unsettling to think about pushing data into large language models that don’t have a perfect track record in data stewardship, copyright compliance, and ethical handling.

Now, we’re giving those same models system access and issuing vague directives on what we want them to do. When you say it out loud, you realize how fundamentally risky some of these decisions are. Let’s give system agents unrestricted access to critical systems, provide them with unclear instructions, and then expect them to operate precisely as intended—despite knowing they don’t always meet expectations.

Scott King
Yeah.

John Michelsen
What’s the security team supposed to do with that?

Scott King
Yeah, security teams have always tried to limit access because it’s a constant risk. Everyone has encountered this. Maybe not you, since you’re an administrator for every system, and I am for a lot as well. But when you gain access to a system, you naturally test it—wondering if you can see certain features or data. That’s a valid concern because of the risk of data loss.

John Michelsen
Right.

Scott King
You don’t want access granted to someone who shouldn’t have it—least privilege, right? It’s a real fear, but how do you handle it?

John Michelsen
That’s right. We need to take a more serious approach to what an agentic platform can and should do with enterprise data and access. The idea of a general agent having significant access without proper guardrails is risky. And by guardrails, I don’t mean relying on a prompt and hoping it’s honored. We need much stronger controls than that.

In fact, we can design this in a way that improves security posture. We’ve put a lot of work into making that possible. I hate to show my age, but the reality is that many agentic platforms are built by people who haven’t had to deploy software in an enterprise environment and then support it for years. They haven’t had to ensure it meets expectations for data privacy and security over time.

Scott King
Do you think that matters? Can’t I just have ChatGPT build a software product for me?

John Michelsen
Scott, I know that’s a softball question, so I’ll double-check my swing. The answer is—ChatGPT can certainly help. But if you’re not in control, then who is? And is that even allowed in your organization?

No one is going to hold a large language model accountable. They will hold you accountable. You have to ensure that all the critical factors—security, compliance, customer commitments, regulatory requirements, company policies—are met. That’s your responsibility.

You can absolutely get help where it makes sense and where it’s responsible, but ultimately, you’re on the hook for this.

Scott King
Right. The CISO, or whoever is in charge—if there isn’t a CISO, the highest-ranking security person—has to sign off on these things. They’re personally liable for this.

John Michelsen
Yes, and globally, governments are becoming more precise about establishing individual liability, even criminal liability, within organizations. This isn’t fear-mongering. You can improve your security posture while delivering an agentic, AI-driven platform, but you have to be thoughtful about it.

In the rush to market, many platforms are missing critical security elements. We’ve been in the market for four years, so we’ve addressed these challenges, but in many cases, we see platforms that lack key security measures. When someone says, “Well, we don’t have to do that with the other product,” that’s not a good thing—that’s a deficiency. The question should be, how are you handling that risk?

Scott King
So how do you improve security? If today’s baseline is where IT teams, CISOs, or other security leaders are evaluating these platforms, how do you make it better?

John Michelsen
Let’s break it down into three areas of concern.

First, external cyber threats—the ones we focus on the most. While they get the most attention, they aren’t the primary cause of breaches. Still, you need to ensure that any AI or traditional platform follows best practices for SaaS and cloud security. There are industry standards, like SOC 2, that help vet these platforms. You need to confirm that the provider follows best practices.

Second, regulatory and compliance requirements. Even if you’re not personally concerned about certain regulations, someone else is—they wrote the rules, and you have to follow them. Standards bodies like PCI define strict requirements, and company policies—such as data residency rules in the EU—must also be considered. This isn’t just an AI discussion; it’s about ensuring that as you expand data access within your organization and integrate AI platforms, you’re still meeting all regulatory and compliance commitments.

John Michelsen
A mature platform will have mechanisms to ensure compliance and prove it. The ability to document and demonstrate compliance is crucial. If an AI-based system lacks sufficient logging, access controls, and precision, there’s no defensible audit trail. You need more than just doing the right thing—you need proof that you did.

The first two concerns—external threats and compliance—are significant, but they’re nothing compared to the biggest risk: internal data exposure. The majority of data breaches come from internal users, whether accidental or intentional.

Talk to anyone in cybersecurity, and they’ll tell you—most of the effort is focused on protecting against hackers, but most data loss happens through employees. A well-designed agentic platform can actually improve security if implemented correctly.

John Michelsen
The biggest challenge today is that people are the integration strategy. Employees need access to multiple systems and software on their desktops or even laptops at home. This creates a massive security risk—exposing sensitive information to individuals who may not need it for every task they perform.

In most large companies, teams exist just to determine whether an employee really needs access to a particular application. And 99% of the time, access is granted because, ultimately, that person does need it for a task. But we don’t track how long they need it, we don’t maintain visibility over ongoing access, and we don’t manage it well over time.

This uncontrolled exposure to systems and data is the number one way internal threats—whether accidental or malicious—become the largest security risk of all.

Scott King
That reminds me—when you automate employee workflows using platforms like Zapier, Make.com, or If This Then That, there are dozens of these tools where people automate their own jobs. But they need access to all those systems, and the right level of access.

I’ve built some of these workflows using other platforms, and I always wonder—why do I have admin access to everything?

John Michelsen
Yeah, yeah. That’s right.

Scott King
From my perspective, it’s easy to set up, but if someone is trying to automate something they don’t have access to, they’re constantly requesting elevated privileges. So how is what you’re talking about different from when people build workflows in Zapier?

John Michelsen
In some ways, it’s not different. But in other ways, it’s concerning. Even with Zapier, security teams get nervous. The typical CISO cringes at the thought—how many of my employees have connected critical systems to something I’m responsible for securing? How do I know there’s no malicious data exfiltration or injection?

By the time a company like Zapier reaches a certain level of maturity, we can assume their business model isn’t about infiltrating companies and extracting data. But what about that small AI startup offering similar capabilities for free or at an unsustainably low price?

I’ve seen people get excited about these tools, interconnecting systems without considering security. If a company isn’t charging you, or their pricing doesn’t even cover their AWS bill, you have to ask—what’s their real business model?

Scott, you and I have been around long enough to know—if you’re not paying for it, you’re not the customer. You’re the product.

Scott King
If it’s free, you’re the product. But do you think that’s really happening? Could an AI agent, even if it’s not free—say it’s only $5 a month—be fooling people into thinking it’s legitimate? If you can sign up that easily, you probably don’t know much about its security.

John Michelsen
Let’s just say you probably haven’t fully vetted its security. You haven’t carefully read the SOC report or verified that the audit was done by a reputable firm. You likely don’t know what a proper penetration test report looks like or what its high, medium, and low-severity risks mean.

To be clear, I’m not saying we have evidence that any particular AI agent company is deliberately trying to extract data. But we’ve seen this before in other industries.

When we worked in mobile security, there were weaponized applications—entire companies built ecosystems of users just to harvest data.

Scott King
Yeah, but people didn’t believe us.

John Michelsen
Right. These apps built massive followings, and then either the original developer or an acquirer weaponized them, turning them into malware. Once millions of people were using them—without paying—there had to be a reckoning. The developer needed to make money, and instead of scraping by on $5 a month in ad revenue, they took a check for $100,000 from a company and said, “Sure, here it is.” That’s when malware was injected, and every user automatically updated.

There are countless examples of this, especially in mobile security. But the risks with agent platforms are even greater. At least on mobile, we have sandboxes to limit damage. Now, we’re giving agents access to HCM, CRM, SharePoint, and all kinds of enterprise data, expecting them to follow rules we never explicitly define. “Do what I need, but don’t do anything else, okay?” That’s not a serious approach to security. If this continues, we won’t see good outcomes.

Scott King
That reminds me of a recent conversation. Chris and I talked about the explosion of AI agents a couple of episodes ago. There’s a website that categorizes them all—over 800 agents. There’s no way to vet the security of 800 different agents, right?

John Michelsen
No, and don’t try to adopt all 800 at once. Only use what you need, and be intentional about it. Security discussions can easily turn into fear-mongering, but that’s not my goal. I’m explaining what happens when you’re not careful. The good news is that if you do this responsibly, you can improve your security posture.

Yes, giving AI agents access to enterprise systems has risks. But if you work with a well-established company—one with pen test reports, a clear revenue model, and a business designed to serve customers, not exploit them—you’re on the right path. A reputable provider understands security, regulatory compliance, and data protection. That’s half the battle.

The other half is eliminating unnecessary access. If AI agents orchestrate system interactions, you don’t need employees to have direct access to all those systems. That’s where accidental or malicious data loss happens.

Take Krista as an example. Instead of giving every sales rep, partner, or third party access to the CRM, they interact with Krista through chat, voice, or other interfaces. Krista orchestrates system activity based on predefined rules, ensuring users only access what they need.

John Michelsen
I control what Krista can and can’t do, and I trust that it will operate as directed. By doing that, I’ve reduced exposure to CRM, sales data, ERP, and other sensitive systems—while still achieving a high level of automation. Instead of increasing security risks, I’ve actually reduced them.

To move forward, companies need an enterprise-grade platform that elegantly integrates AI, systems, and people. Right now, we rely too much on people as the integration layer. That’s why security breaches happen. Instead of giving employees broad system access, we should shift business processes into software and let automation handle execution.

John Michelsen
When employees have excessive access, they’re bound to stumble upon information they shouldn’t see. Sometimes it’s accidental—“Oh, that’s interesting, I’ll save that for later.” Other times, it’s intentional—“I’m quitting, and this data might be useful elsewhere.”

Scott King
Yeah, like when a sales rep takes the customer list or an IT guy downloads proprietary data.

John Michelsen
Exactly. And it happens more often than we’d like to admit. That’s why security teams exist—to prevent it. Even if a company doesn’t have a dedicated security team, someone is responsible for mitigating these risks. So instead of making it easier for these issues to occur, we should take intentional steps to make them harder. That’s what we’re talking about here.

Scott King
So, let’s say a person has system access. They read data, store it in their head, take a screenshot, or download a file. Either way, the data is leaving. What’s the difference between that and having an agent do it?

An agent still has to read the data, process it, send it somewhere, perform calculations, and then write updates back. Do people fear that the agent is using an external service and that data is leaving and returning? Or is that concern unfounded?

John Michelsen
We hear all of it. Whether the concern is valid depends on whether you’re working with software that’s well-built, secure, and free of malicious intent. I can’t speak to every system out there, but this issue comes up almost every time.

There’s a question people want to ask, but they don’t. Instead, they try to phrase it in a way that avoids directly addressing their real fear.

Scott King
So, what’s the question they want to ask but don’t? And what do they actually say?

John Michelsen
The real question is: Am I an idiot if I connect Krista to all my systems and assume you’re not just sucking everything out, leaving me with no control?

They’re thinking, You’re some cloud-based AI. If I give you access, how do I know what you’re doing? Am I a complete idiot, or just a little bit of an idiot, for even considering this?

Scott King
Yeah, because they’re responsible for security. If something goes wrong, they’re on the hook.

John Michelsen
Exactly. But instead of asking that outright, they use verbal gymnastics. What they actually ask is something like:

“If I give Krista access to a system, how does it know what it’s allowed to retrieve and what it’s not? How do I tell it what to do and what not to do? Where does the data go? What happens when I connect it? Does it just start running things automatically?”

They’re trying to get at the same fundamental concern—whether they’re handing over too much control.

Scott King
Yeah, they want to know if Krista is a super admin with all the encryption keys.

John Michelsen
Right. They’re worried Krista is building a massive data lake in the cloud, training AI models on their data, and potentially making that intelligence available to competitors.

Given some of the early concerns around generative AI, that fear isn’t entirely unfounded. But the way they ask the question often sounds like, Are you about to hook up a vacuum hose and start pulling in all of our data?

Scott King
Yeah, it’s almost like they’re asking, What I really want to know is, is Krista going to run somewhere, collect all my data, and then suddenly demand a million dollars for access?

John Michelsen
Exactly. Now that I’ve extracted every byte from your systems, I’m going to lock them down, corrupt the data, and sell it back to you.

Do I think most vendors on the market are acting with malicious intent? No. But I do struggle with the business model of those trying to develop cutting-edge AI, run LLMs, and handle the associated compute costs—all for $10 a month. There’s a reckoning coming. Either they’re burning through VC money and will eventually charge much more, or you’re not actually the customer. It’s one of the two.

Scott King
So along those lines, what about DeepSeek? It’s made a big splash recently because it’s so cheap, and people are saying it’s going to shake up the market. But it’s from China.

What’s the real concern here? When someone worries Krista might make a copy of all their data, what’s the equivalent concern with DeepSeek?

John Michelsen
There’s a fair amount of concern about DeepSeek, but not all of it is well thought out. I don’t want to say it’s uneducated, but it’s not always fully considered.

We use a lot of Chinese-made goods here in the U.S.—including the computer and microphone I’m using right now. The fact that an AI model is made in China isn’t inherently alarming.

Scott King
Right, our devices, our infrastructure—so much of it comes from China.

John Michelsen
Exactly. Now, from a geopolitical perspective, there are concerns about who will lead in AI. I’m not naive about that, and I want the U.S. to continue driving global innovation, which we’ve done for decades.

But instead of immediately distrusting any foreign innovation, collaboration can be beneficial. Open-source projects, for example, allow full transparency. You can download the code, inspect it, stand it up, and monitor network activity. If there’s anything suspicious, you’ll see it.

Look, decades ago, there were news reports suggesting foreign cars could be used for mass attacks, with hidden bombs detonated remotely. It sounds ridiculous now, but at the time, some people took it seriously.

That’s why we need to be prudent. I said earlier—vet your vendors, understand their business model, and ensure they have a sustainable path forward. If they don’t make money, they won’t be around tomorrow, or their product will change in ways you don’t expect.

I’m not saying DeepSeek itself is a threat. What it has shown is that there are more cost-effective ways to build models. And that’s not even a bad thing for chip makers—it helps maximize the value of their hardware.

People assume AI-driven efficiency means job losses, but that’s not what’s happening. My development team is 30-40% more productive thanks to AI-assisted coding. But I didn’t reduce my team—I’m getting more value out of the same people. They’re becoming more valuable, not less. And we’re still hiring, expecting even higher productivity.

The same applies across industries. Just because something becomes easier doesn’t mean it’s less valuable. If it produces real value, its impact only grows.

Scott King
That’s a good point. You mentioned developers becoming more efficient. Does AI just make them faster, or does it actually upskill them?

John Michelsen
It’s both. AI upskills developers and makes them faster, but they still need a strong foundation. I wouldn’t say they need mastery, but close to it. They have to understand what the code does, how it works, and where AI-generated code might be incorrect.

Large language models don’t “think” better than us, but they process information faster and have broader exposure to different approaches. A human expert might know five ways to solve a problem—an AI model has seen 500. If you define that breadth as a skill, then yes, AI has a skill advantage in that regard.

Humans also have biases. We tend to favor familiar methods and dismiss approaches we haven’t used before. AI doesn’t have that limitation. It presents possibilities we might never have considered.

For a junior developer working on something new, AI’s expanded awareness is incredibly useful. But the accuracy, relevance, and alignment of the generated code still need human oversight.

That’s why we encourage our team to use AI for productivity and fresh ideas. We don’t want to be siloed by our own experience. But at the end of the day, the responsibility is still on the developer. You don’t get to blame AI for introducing a bug. You’re the one committing the code.

It’s the same with security and automation. We can’t abdicate responsibility to AI agents. We can’t say, “Well, the agent did it, not me.” That’s not how it works.

To bring this full circle, agentic platforms unlock massive productivity, innovation, and efficiency—but they require structure. You can’t just say, “Go run my company” and expect good results. You need guardrails and a clear understanding of where to allow flexibility.

John Michelsen
If we get that right, the potential is incredible. This is an exciting time to be in AI, especially for people like us who have been working toward this for years. Seeing all these pieces come together is amazing.

I’ve never tried to time a market, but I feel incredibly grateful that things are unfolding this way. We’re excited to help businesses take advantage of this transformation. Unfortunately, too many companies will hesitate, watch from the sidelines, and start too late. Half the battle is learning by doing—not waiting.

Scott King
Agreed. Well, with that call coming in, thanks, John. Great conversation on security. Until next time.

John Michelsen
Thanks, Scott. Thanks, everyone.
