AI Fear and the Fear of Missing Out

November 29, 2023

Embracing technological change via automation and artificial intelligence (AI) is no longer optional; it’s a necessity. Delaying AI use in your company can hinder progress and put you further behind your competitors. However, embracing AI adoption is not without its apprehensions. Your concerns about unknown outcomes and hallucinations are valid, but they can be addressed with the right security, accuracy, performance, and cost strategies to limit your risk and exposure. Integrating AI is about continual progress, focusing on the transformative power of automated processes rather than the pursuit of unattainable perfection. We will show you how to overcome AI fear, build confidence, choose the right process for AI, and guide you toward the first steps for adopting AI.

Embracing Change

Adapting to change, specifically the integration of AI and automation technologies, is an inevitable step in modernizing operational processes. Humans, by nature, exhibit caution and resistance toward such shifts; it’s a response built into our DNA. But it’s essential to recognize that this initial resistance, while natural, must be overcome for the sake of progress. Businesses that embrace this change sooner rather than later stand at an advantage, as they are less likely to be surpassed by competitors who are quick to adopt AI.

Overcoming AI Fear and Building Confidence

Overcoming the inherent fear of AI and building confidence in automated processes is a crucial step to remain competitive. As Chris points out, being among the first to adopt AI can offer an advantageous position in a technologically driven market. Realize that AI is not about replacing human effort; rather, it’s about augmenting processes to reduce human error and bolster quality assurance, and that higher quality builds confidence and trust. However, trust isn’t built overnight. By automating tasks and including a human-in-the-loop oversight system, we create a safety net that ensures the integration of AI is gradual and within your control. Observing AI performance in real time allows us to evaluate its decision-making abilities, further fortifying our confidence. Essentially, the journey toward AI adoption is about embracing change without surrendering control, thereby mitigating fear and fostering trust in the process.
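
One way to picture that safety net is a confidence gate: the model’s answer is acted on automatically only when its confidence clears a threshold, and everything else is queued for a person. Here is a minimal sketch, assuming the model returns a single confidence score per prediction; the threshold, labels, and review queue are hypothetical placeholders, not Krista’s implementation:

```python
# Minimal sketch of a human-in-the-loop confidence gate (illustrative only).
# The threshold, labels, and review queue are hypothetical placeholders.

CONFIDENCE_THRESHOLD = 0.90  # below this, a person reviews the result


def process_automatically(item, label):
    """Stand-in for the automated step (e.g., routing or replying)."""
    print(f"Auto-routed {item!r} as {label!r}")


def handle_prediction(item, label, confidence, review_queue):
    """Act on high-confidence predictions; queue the rest for a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        process_automatically(item, label)
    else:
        review_queue.append((item, label, confidence))  # human-in-the-loop


# Usage: predictions from any classifier feed the same gate.
queue = []
handle_prediction("Where is my order #123?", "order_status", 0.97, queue)
handle_prediction("I want to change my plan", "billing", 0.62, queue)
print(f"{len(queue)} item(s) awaiting human review")
```

As observed accuracy earns trust, the threshold can be lowered gradually, which is exactly the idea of embracing change without surrendering control.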

Choosing the Right Process for AI Adoption

Choosing the right process for automation and AI is straightforward. To begin with, consider the volume of tasks involved in a given process. Automating high-volume tasks, like handling and responding to inbound emails, reduces the workload on your staff and delivers immediate cost savings. Next, evaluate the magnitude of human error within the process. AI can increase accuracy in processes that contain repetitive tasks prone to human error or insufficient training. It’s also important to assess whether the process is internal or external. An internal process, like HR inquiries, is an excellent place to start, especially in large corporations where repetitive questions are common. Building an employee assistant using generative AI is simple, provided you properly secure and govern the data. Transitioning an internal process first can help build confidence within the organization before tackling external ones. However, Chris advises tailoring your approach to your company’s culture and risk tolerance. If your company prioritizes ROI, starting with a larger, more impactful process may be the way forward, while a more cautious culture might benefit from starting small and scaling up. The key is to align AI adoption with your company’s specific needs, growth strategy, and IT capabilities.

First Steps Towards AI Adoption

The initial steps towards adopting AI involve identifying and prioritizing areas within an organization that could significantly benefit from automation and AI. Rather than embarking on a lengthy and resource-intensive study, Chris recommends a straightforward approach:

  1. Write down five areas to examine. These could be processes that are high in volume, prone to human error, or both.
  2. After listing these areas, rank each based on its priority. Priority could depend on the volume of work the process involves and the potential risk of errors. For instance, AI could effectively manage high-volume tasks like answering emails or making decisions in a warehouse, which not only eases the workload but also improves accuracy by minimizing human error (see the scoring sketch after this list).
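
The ranking can be made concrete with a toy priority score. Here is a minimal sketch, assuming priority is driven by expected weekly errors (volume times observed error rate); the processes and numbers are hypothetical examples, not benchmarks:

```python
# Rank candidate processes for AI adoption (illustrative numbers only).

candidates = [
    # (process, weekly volume, observed human error rate)
    ("Inbound email triage",     10_000, 0.05),
    ("Warehouse decisions",       3_000, 0.08),
    ("Invoice matching",          1_500, 0.04),
    ("HR policy questions",         400, 0.02),
    ("Product recall inquiries",    200, 0.10),
]

# Priority score: expected errors per week = volume x error rate.
ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

for name, volume, error_rate in ranked:
    print(f"{name:26s} expected errors/week: {volume * error_rate:,.0f}")
```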

This simplified and focused approach facilitates a smooth transition to AI, enabling organizations to embrace the transformative power of automation without being overwhelmed by complexity.

Embracing Progress Over Perfection

Embracing AI ultimately signifies an acceptance of continual progress rather than the pursuit of unattainable perfection. The real purpose of the first AI project in any organization should be to alleviate a significant pain point, thereby making a tangible impact on efficiency and productivity. While the journey towards AI integration is paved with uncertainties and risks, managing these risks strategically is a fundamental part of the process. AI will potentially transform not only business operations but also everyday processes, significantly reducing the incidence of human error. I encourage those seeking to leverage AI and automation to contact us to get started. Let’s embark on this journey together and build trust in AI.

Speakers

Scott King

Chief Marketer @ Krista

Chris Kraus

VP Product @ Krista

Transcription

Scott King: Well, hi there, everyone. This is Scott King, and thanks for joining me and Chris for this next episode of The Union podcast. Today, we’re going to talk about fear and fear of the unknown. People are talking to us about moving toward automated processes. They want to try AI, but they’re scared.

Why are they scared? Is this unfounded, or should they be scared?

Chris Kraus: Anytime we’ve had a shift in technology, there’s been fear about process change. How does it affect my job? How does it affect how I do things? With the first automated data processing computers, we could send out a water bill or an electric bill instead of people typing them. There probably wasn’t a lot of fear; no one thought the computer was going to take away their job. They were amazed it calculated the water bill. But today, especially with artificial intelligence and process automation, there’s a lot of fear. People wonder, is this going to take away my job? Every technology prompts that question, and it doesn’t. Ideally, it’s going to enrich your job and make your life better, so you can accomplish more important things instead of the menial tasks.

Scott King: No technology eliminated jobs; it just made those jobs better, just like AI is going to do. I use generative AI for all types of things. It’s never 100% right. I still have to edit it, apply judgment, and manipulate the output.

But it does get me to a certain point faster than before. I’ve talked about using it to clean up the transcriptions of the podcast and to take out our comfort words. It does go a lot faster, but there still is that human judgment element. We’re not going to send our lives and our entire jobs to the computers.

That’s what people are fearing. They’re thinking, “I’m going to have AI do 100% of what I need to do,” and that’s just not the case.

Chris Kraus: AI by itself does one thing; the value comes from applying it to an outcome or a multi-step process. How do we add automation of human tasks and computer tasks together to reach the bigger picture? Here’s a great example. I do lots of POCs with customers, and they claim their emails and customer support are unique. I tell them they’re not.

They give me 20,000 emails, and I run them through Krista. It takes me an hour. I say, “Hey Krista, please process these.” We take 20% and set them aside for testing, and we train the email model on the remaining 80%. Then we assess how well we’re predicting. People are afraid we won’t get the answer right and get hung up on needing 100% accuracy.

What I do is show them the 20% we tested. We often achieve 90% confidence. When we predicted a different answer than what a human said, it was usually a human error. Humans make mistakes. So, is it the end of the world if we read an email and categorize it as an order status instead of a shipping status?

With AI, we get good at that. We automate the process of looking up the status from the right system and emailing back. I show people that the data they provided had human errors, maybe due to fatigue, multitasking, or working in a non-primary language. Errors happen, but they aren’t shutting down your business today.

If AI doesn’t achieve 100% accuracy today, it’s not going to shut down your business either. We may do better with AI because there are things it’s very good at: we’ll identify an order status or shipping status, look it up, reply back in an email, and close it. It’s not perfect today, so don’t think you have to be perfect to apply AI to a process. What you’re doing today has flaws too.
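
The 80/20 split Chris describes is a standard train/test evaluation. Here is a minimal sketch of the same idea using scikit-learn; the emails, category labels, and model choice are hypothetical stand-ins, not Krista’s pipeline:

```python
# Illustrative 80/20 train/test evaluation for email categorization.
# The emails, labels, and model are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

emails = [
    "Where is my order?", "What's the status of order 123?",
    "Has my package shipped?", "Track my shipment",
    "Please cancel my order", "I want to cancel",
]
labels = ["order_status", "order_status",
          "shipping_status", "shipping_status",
          "cancellation", "cancellation"]

# Set 20% aside for testing; train the model on the remaining 80%.
X_train, X_test, y_train, y_test = train_test_split(
    emails, labels, test_size=0.2, random_state=42)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# Assess how well we are predicting on the held-out 20%, with confidence.
for email, probs in zip(X_test, model.predict_proba(X_test)):
    label = model.classes_[probs.argmax()]
    print(f"{email!r} -> {label} (confidence {probs.max():.0%})")
print(f"Held-out accuracy: {model.score(X_test, y_test):.0%}")
```

In practice the test set would be thousands of emails, and the per-prediction confidence is what feeds a human-in-the-loop gate like the one sketched earlier.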

Scott King: There are all types of errors because we’ve shoved all this process knowledge onto people and expect them to run multi-step outcomes for a customer. That’s not 100% reliable. People get tired, they forget a rule. For example, I went to upgrade a phone at T-Mobile, and they offered me a free phone. I was skeptical, but they assured me I wouldn’t pay for it. A month later, I found an $83 charge on my bill for the phone. It was a human error at the store; they applied the wrong sale or upgrade process. I wasn’t eligible, and resolving it was a nightmare. It ended up costing me a significant amount. This kind of thing happens. Can you imagine a business leader expecting their people to be 100% right before moving to more automation and AI?

Chris Kraus: You would need three people to audit and check every decision. It would be ridiculous.

Scott King: You would never get there. You would go out of business. Someone else moving more processes to an automated platform and more toward AI will outpace you and take your customers.

Chris Kraus: We looked at an article from Quality Gurus. They said humans are usually not the problem; blaming human error means you didn’t do a good root-cause analysis. The root cause is often wrong data, lack of training, or the absence of standard operating procedures. You can’t blame human error if you didn’t set people up for success. The funny part is, they suggest better training or adding automation to help. Think about it: if you have automation, it can assist in these situations.

If you encode a workflow in software, it will perform the same way every time. Computers are very good at multitasking and following rules. Many of the mitigations suggested for process problems can be handled by software, which excels at these tasks. It’s usually not human error; there’s a root cause beneath it, and these issues can be solved with technology. Errors happen, but that doesn’t mean we shouldn’t strive to do better.

Scott King: Technology, especially expensive cybersecurity software, is used to alleviate human error. 73% of all breaches are caused by human error, like clicking on malicious links. But from the process automation standpoint, it’s a no-brainer. It’s about being comfortable with change, which is human nature. We’re risk-averse by nature, always alert for potential dangers. But adoption of technology will happen eventually; we just have to get over the initial resistance.

Chris Kraus: There are many books on the first-mover advantage. If you’re the first to adopt AI to augment your processes, you’ll see benefits. Automating more processes means fewer human mistakes. Beyond AI making predictions, process automation itself will help reduce your overall error rate, essentially increasing the level of quality assurance everywhere. Don’t fall into the trap of needing perfection to start. Perfection is the enemy of progress. Sometimes we have to get things done.

Scott King: Perfect is often said to be the enemy of good. We’ll have to look that up; we didn’t prepare for that.

Chris Kraus: The point is, don’t let your competitors surpass you by adopting AI. We have ways in AI to indicate our confidence level. Sometimes it may be more accurate than human judgment. There’s also the added value of automating the process and outcome, which helps remove other types of human errors.

Scott King: By automating more things, you can create a gate. If you don’t completely trust the process, you can include a human-in-the-loop step for oversight. As you observe its performance and see it helps in decision-making, you’ll gain confidence. It’s not making the decision; it’s assisting with a high level of confidence.

Eventually, you might find the machine is consistently accurate, and you may decide to let it take over more. Then you move on to the next step. It’s not about turning everything over at once. People fear a black box scenario where everything goes haywire, but it doesn’t work like that.

As a next step, Chris, if I wanted to manage my risk and watch the process, what type of process is good to start with? People ask us about a private GPT-type experience for human resources or FAQs. Is that a good place to start, or is the email example you talked about better?

Chris Kraus: You could start with either. Look at volumes. If you get a thousand emails a week or 10,000 emails a week, taking some of that burden away and answering those can lead to significant cost savings, whether in customer satisfaction or reducing the workload on staff. If you have only a few HR questions a week, generative AI might not be as helpful. But with a large workforce, especially with high churn, where you get repetitive questions, it makes a lot of sense. There are ways to give confidence in the answers and ensure security. It’s a volume game. Decide whether your bigger pain point is internal-facing or external-facing. Once you’re successful with one, you’ll gain confidence within the organization. You might start with something lower risk and lower volume to get internal proof, then move to bigger projects. Or, if your company is focused on ROI, start with the bigger one first and then work backward. Your company culture will guide which approach to take.

Scott King: It’ll be obvious. You wouldn’t jump into a self-driving car without sitting behind the wheel and ensuring it’s doing everything correctly. It’s the same with AI. You wouldn’t just get in the back seat, especially since there’s a risk of physical harm. But if an AI answers an email incorrectly, it’s not as significant.

To sum it up, Chris, your first AI project will likely be obvious because it addresses a burden, like too many emails or a frequently breaking process. How do you escalate that? What does someone need to do besides research to become comfortable and manage their risk?

Chris Kraus: First, you don’t need a 12-month study. Write down five areas to look at. Consider their volumes and the risk of errors, as human error is always a factor. Then, rank them in order of priority. You don’t need months and consultants to identify your shortlist. These are things you do every day. Then, decide where to apply AI, whether it’s answering emails, making decisions in a warehouse, or responding to questions about products and recalls.

Scott King: Perfect. Thanks, Chris. This was a good short episode. Hopefully, we can become more comfortable with our comfort words and maybe have AI help with that, like removing them from our speech. I don’t believe I said mine today. Thanks again, Chris.

Guide: How to Integrate Generative AI
