What TPRM Professionals Think About AI

February 14, 2024

In Third-Party Risk Management (TPRM), adopting Artificial Intelligence (AI) presents both an opportunity and a dilemma: first, whether you should use AI at all, and second, for which tasks. I talked with TPRM experts Sam Abadir and Tom Garrubba about responses from a recent poll of approximately 1,000 risk management professionals. We reviewed the questions and responses and offered insights and opinions based on the results.

The Main Use for AI in TPRM Programs: Insights and Future Directions

As AI explodes into every part of every organization, GRC professionals are expecting an enormous AI impact. Yet less than half of the TPRM professionals polled in a continuing education survey say they are already using AI. When asked, “What is the main use for AI in your TPRM program?” respondents gave a diverse range of answers: 15% use AI for “Vendor risk tiering and identification,” 10% for “Pre-contract due diligence,” 20% for “Ongoing monitoring,” and a notable 53% indicated they do not use AI for TPRM at all. If AI is so hot, why aren’t more using it for risk management?

Sam identifies several factors influencing the hesitant adoption of AI in TPRM. He points out the complexity and novelty of AI as significant hurdles. Many risk professionals have yet to grasp AI’s potential beyond its more sensationalized achievements and underestimate its capability to streamline repetitive tasks. The perception of AI as a tool reserved for groundbreaking innovations has overshadowed its practical applications in day-to-day operations, delaying its integration into TPRM processes. Furthermore, awareness and governance issues, including concerns over privacy and a lack of familiarity with AI solutions tailored to TPRM needs, contribute to the reluctance. However, Sam remains optimistic about the future, predicting a shift towards more widespread AI adoption in TPRM as its benefits become more widely recognized.

Tom highlights fear and uncertainty as significant barriers to AI adoption in TPRM. Organizations are cautious and need to learn how to deploy AI effectively while managing data privacy and compliance with regulatory standards. This caution is particularly pronounced in regulated industries, leading to a technological conservatism where companies are reluctant to be early adopters of cutting-edge technologies. Tom also discusses the middle ground approach, where organizations aim to balance the use of AI without fully committing to its most advanced capabilities, mainly due to uncertainties about deployment and data management.

Addressing TPRM Program Needs with AI: Automated Risk Assessments

Collecting supplier data, populating assessments, and drawing risk insights from that data help organizations prioritize findings, risks, and issues. However, this process has been hampered for years, with companies seeing only a portion of their risk profile. Imagine seeing only half of your favorite portrait or movie; without the full view, you miss important pieces of the story. It is therefore no surprise that the most popular response to “When it comes to using AI in third-party risk and compliance, what does your program need most?” is “Automated Risk Assessments” (53%). Automated risk assessments were by far the most popular answer, followed by 24% seeking “Predictive Analytics,” 10% aiming for “Cost Reduction,” and 13% desiring “Customized Risk Scoring.”

Automated Risk Assessments: A Critical Need

The demand for automated risk assessments highlights the growing complexity and volume of third-party relationships that organizations must manage. This complexity, coupled with the need for efficiency and accuracy, makes automation not just desirable but essential. Sam and Tom emphasize the role of AI in enhancing risk management capabilities without necessarily increasing headcount. Both highlight a common challenge within TPRM programs: insufficient resources and tools to effectively address the expanding scope of third-party risk. With skilled people becoming harder to find and retain, companies need to lean on automation and AI to manage the growing complexity.
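To make “automated risk assessments” concrete, the sketch below shows one way a program might score and tier vendors from a handful of attributes. The vendor fields, weights, and tier thresholds are illustrative assumptions, not any particular product’s implementation; a real program would draw these signals from questionnaires, monitoring feeds, and internal policy.

```python
# A minimal sketch of automated vendor risk tiering, assuming hypothetical
# vendor attributes and weights. Real programs would source these signals
# from questionnaires, monitoring feeds, and internal policy.

from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    handles_pii: bool      # does the vendor process personal data?
    criticality: int       # 1 (low) to 5 (business-critical)
    open_findings: int     # unresolved assessment findings

def risk_score(v: Vendor) -> int:
    """Combine a few illustrative signals into a single score."""
    score = v.criticality * 10
    score += 20 if v.handles_pii else 0
    score += v.open_findings * 5
    return score

def risk_tier(v: Vendor) -> str:
    """Map the score onto tiers that drive assessment depth and frequency."""
    s = risk_score(v)
    if s >= 60:
        return "Tier 1 - full assessment, continuous monitoring"
    if s >= 35:
        return "Tier 2 - standard assessment, annual review"
    return "Tier 3 - light-touch questionnaire"

if __name__ == "__main__":
    vendors = [
        Vendor("Payroll Provider", handles_pii=True, criticality=5, open_findings=2),
        Vendor("Office Catering", handles_pii=False, criticality=1, open_findings=0),
    ]
    for v in vendors:
        print(f"{v.name}: {risk_tier(v)} (score {risk_score(v)})")
```

Running this places the hypothetical payroll provider in Tier 1 and the catering vendor in Tier 3, mirroring how automated tiering lets a team focus its deepest assessments on the riskiest relationships.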

Strategic Vision and Proactive Management

Sam and Tom stress the importance of having a strategic vision and plan for AI in TPRM, advocating a proactive rather than reactive stance. They argue that AI should be seen as a decision-support tool that complements human judgment rather than replacing it. This approach means leveraging AI for efficiency gains while retaining the human oversight needed to interpret AI-generated insights meaningfully. The future of your program shouldn’t be constrained by finding the right people to generate insights; it should generate automated insights for the people you already have.

Streamlining TPRM with AI: Addressing Time-Consuming Tasks

Looking deeper into the opportunities AI brings to TPRM, it is essential to streamline or remove the most labor-intensive workflows. The survey results identify vital areas ripe for innovation through automation and AI. A significant 43% of surveyed professionals say that “Completing and validating risk assessments” is the most promising domain for AI’s efficiency boost. Another 12% see AI’s capabilities extending to “Creating, logging, and following up with findings,” orchestrating a more seamless follow-through of the TPRM process. The same percentage of respondents advocate for AI’s role in “Reviewing third-party contracts against internal policies and controls” and in providing “Quick answers to management’s risk-related questions.” Encouragingly, 13% of risk management professionals recognize AI’s broad-spectrum advantage, endorsing its application across all of these TPRM tasks.


Sam suggests that as TPRM professionals become more familiar with AI capabilities, they could delegate numerous spreadsheet-based tasks to AI. This transition, however, requires a thoughtful approach to ensure that AI tools supplement human decision-making without overstepping. He stresses the importance of AI as a decision support tool, providing risk managers with prepared information to make informed decisions rather than making those decisions autonomously.

Tom goes further, raising concerns about potential Catch-22 situations where the efficiency gains from AI could paradoxically make it harder to justify additional staffing within TPRM teams. This underscores the need for a strategic vision that encompasses both the immediate benefits of AI and its long-term impact on the TPRM function. Tom and Sam agree there is a balance to strike between leveraging AI for efficiency and maintaining the human oversight necessary for nuanced risk management.

Navigating the Perceived Risks of AI in Third-Party Risk Management

As AI integrates into TPRM practices, it is met with enthusiasm and caution. The last question in the poll highlights the perceived risks associated with AI adoption, revealing that 39% of respondents view “data privacy” as the primary concern. This apprehension is followed by worries about “ethical risks” (20%), “lack of AI transparency and explainability” (16%), challenges with “in-house resources to maintain and update AI models” (11%), and “ease of implementation/user experience” (11%). These concerns offer a glimpse into the collective mindset of TPRM professionals as they grapple with the potential and pitfalls of AI.

Insights on Data Privacy and Ethical Risks

Tom articulates the predominant fear surrounding data privacy, emphasizing the dangers of inadvertently exposing sensitive information. The risk of leaking personally identifiable information (PII), client details, or even intellectual property could devastate an organization’s reputation and legal standing. Tom advocates for a tailored approach to AI implementation, suggesting that limiting AI’s scope to non-sensitive data could mitigate privacy concerns. This strategy aims to ensure that AI tools are used in areas where they can provide value without risking data breaches.

Sam echoes Tom’s views and extends the discussion to intellectual property. The potential loss of proprietary information through AI platforms is a significant threat, underscoring the need for stringent data governance and secure AI deployment. Both experts stress the importance of establishing clear boundaries and protocols for AI use within organizations to protect sensitive data and maintain trust.

Strategic and Secure AI Deployments in TPRM

AI and automation will improve most business functions and workflows, including data-heavy risk management. Using AI in TPRM can yield significant efficiency gains, cost savings, and risk mitigation. However, it is crucial to deploy AI strategically while addressing potential risks and maintaining data privacy and ethical standards. Here are a few high-level steps to consider:

  • Robust Governance Frameworks: Developing comprehensive policies and procedures that govern AI use, ensuring alignment with privacy laws and ethical standards.
  • Selective AI Application: Identifying specific areas within TPRM where AI can add value without compromising sensitive data or ethical principles (see the sketch after this list).
  • Continuous Education and Training: Equipping TPRM professionals with the knowledge and skills to manage AI tools effectively, ensuring they understand both the capabilities and limitations of AI in risk management.
  • Stakeholder Engagement: Involving key stakeholders from across the organization to foster a collective understanding of AI’s role in TPRM and address any concerns proactively.
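As one illustration of selective AI application, the sketch below redacts obvious PII from assessment text before it is handed to any external AI service, echoing Tom’s suggestion to keep AI away from sensitive data. The regex patterns and the submit_to_ai_service() placeholder are assumptions for illustration only; they are not a complete data-protection control and do not describe any specific product.

```python
# A minimal sketch of "selective AI application": redact obvious PII before
# any text leaves the organization for an external AI service. The patterns
# and the submit_to_ai_service() placeholder are illustrative assumptions,
# not a complete data-protection control.

import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with labeled placeholders before AI processing."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def submit_to_ai_service(text: str) -> None:
    # Placeholder for whatever AI tooling the program actually uses.
    print("Submitting:", text)

if __name__ == "__main__":
    excerpt = (
        "Vendor contact: jane.doe@example.com, 555-123-4567. "
        "Assessment notes: encryption controls validated on 2024-01-15."
    )
    submit_to_ai_service(redact(excerpt))
```

The design point is the order of operations: redaction happens inside the organization’s boundary, so whatever reaches the AI tool has already been stripped of the most obvious identifiers.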

To assess your current TPRM processes and identify areas that could benefit from AI, you can contact us at Krista or connect with Sam and Tom for a conversation.

Links and Resources

Speakers

Scott King

Chief Marketer @ Krista

Sam Abadir

VP Solutions @ Krista

Tom Garrubba

TPRM Expert

Guide: How to Integrate Generative AI
