
Miscalibrated trust is AI's hidden risk

Written by Tuesday Hagiwara | Mar 23, 2026

When personal computers launched, designers created a desktop interface. Why?

Gale Lucas, a leading voice in human-centered computing and research associate professor at the University of Southern California, says it’s because the tangible interface allowed users to think of it as a tool.

Without it, users tended to think the machine was flawless or magical.

Now, we’re making the same mistake with AI.

Poor data, unclear problem statements, and data silos aren’t the only challenges plaguing enterprise AI.

The greater, hidden risk is miscalibrated trust: users place either too much or too little confidence in systems whose limits they do not understand.

This miscalibrated trust is costing organizations their strategic advantage.

According to MIT Media Lab, 95% of generative AI pilots fail, and the outlook for agentic initiatives isn’t much brighter. Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027.

When trust goes wrong

When people over-trust AI, they bypass guardrails and accept results without verification, eventually eroding their own critical thinking. Meanwhile, those who under-trust AI fail to adopt the tools, or use them without experimenting deeply, forfeiting any benefits or efficiencies.

In both cases, organizations are leaving out the “human in the loop,” which means leaders aren’t receiving the feedback they need to improve their technology. This quietly undermines adoption rates, hinders model development, and increases reputational risk. Agentic AI escalates these problems by putting a tool in charge that can change systems at scale without accountability.

The cognitive and cultural trust gap

Lucas says that trust breaks down along two dimensions, cognitive and cultural: how people understand AI systems, and how organizational culture supports their ability to assess and shape the technology.

Over-trust of AI can be partially explained by how AI is visually presented, often through conversational interfaces or even magic-wand icons. AI agents are frequently described as ‘making decisions in real time,’ ‘understanding the broader purpose,’ or ‘solving problems autonomously.’ This language invites employees to treat AI like a junior analyst, without the necessary oversight.

These issues are compounded when AI systems perform correctly most of the time. People become complacent and give the tool undue confidence.

A survey from KPMG found that 52% of employees admit to rarely engaging with AI critically at work. This includes verifying or evaluating the output, reflecting on whether they are using the tool appropriately, or thinking about the ethical implications.

Lucas says that users must have the right expectations of the system’s capabilities and be equipped with knowledge of its bias risks, data limitations, and appropriate use cases. For example, it helps to understand that agentic AI uses probability to execute multiple steps based on pre-defined goals, limits, and data. And because the real world cannot be fully articulated prior to deployment, users must understand that AI agents are deployed first, then improved through use and feedback.
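
To make that concrete, here is a minimal sketch in Python of the loop such an agent might run; the planner stub, the action names, and the Limits structure are illustrative assumptions, not any particular product’s design. Each step is proposed probabilistically, checked against pre-defined limits before it executes, and logged so humans can review and improve the agent after deployment.

```python
import random
from dataclasses import dataclass


@dataclass
class Limits:
    """Pre-defined limits the agent may not exceed (illustrative)."""
    max_steps: int = 5
    allowed_actions: frozenset = frozenset({"search", "summarize"})


def propose_action(rng: random.Random) -> str:
    # Stand-in for a probabilistic planner: a real agent would sample its
    # next step from a model conditioned on the goal and available data.
    return rng.choice(["search", "summarize", "write_to_db"])


def run_agent(goal: str, limits: Limits, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    log = []  # every decision is recorded so humans can audit and give feedback
    for step in range(limits.max_steps):
        action = propose_action(rng)
        # Guardrail: out-of-scope actions are blocked, never executed,
        # but still logged for human review.
        status = "executed" if action in limits.allowed_actions else "blocked"
        log.append({"step": step, "goal": goal, "action": action, "status": status})
    return log


if __name__ == "__main__":
    for entry in run_agent("answer a research question", Limits()):
        print(entry)
```

The log is the feedback loop Lucas describes: blocked and executed steps alike become material for refining the agent’s goals and limits after deployment.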

She encourages users to think critically about the tools they are using: “Everyone's so focused on when we need to trust it, trust it, trust it. I'm like, but in the right way, to the right extent, for the right things. Because just pushing everyone forward in terms of trust is going to create other problems.” Instead, she wants people to ask themselves: How does the tool work? What data is it using?

Without this vigilance, research shows that over-trust in AI can reshape judgment itself. Recent studies published in Computers in Human Behavior and Scientific Reports show that users become less aware of their real skill levels, and that when a system is biased, they absorb the bias and then proliferate it.

The culture of an organization can help recalibrate users’ trust. Lucas says creating a healthy workplace social structure enables employees to guide each other through change and to create their own norms for how to trust the technology, use it appropriately, and validate its results.

In psychologically safe cultures, leaders empower employees to maintain their agency, think critically, and provide feedback without career risk. For agentic AI, this means leaders must define where its autonomy starts and ends, when humans are in the loop, and how it will be monitored before it is granted system access.
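
As a rough sketch of that boundary-setting, assuming hypothetical policy tiers and action names rather than any standard: each action an agent requests is classified against an explicit policy, and anything outside the autonomous tier waits for human sign-off.

```python
from enum import Enum
from typing import Optional


class Tier(Enum):
    AUTONOMOUS = "autonomous"        # agent may act on its own
    HUMAN_APPROVAL = "approval"      # a person must sign off first
    FORBIDDEN = "forbidden"          # never granted, even with approval


# Illustrative policy: where autonomy starts and ends for each action.
POLICY = {
    "read_report": Tier.AUTONOMOUS,
    "draft_email": Tier.AUTONOMOUS,
    "send_email": Tier.HUMAN_APPROVAL,
    "modify_production_db": Tier.FORBIDDEN,
}


def gate(action: str, approved_by: Optional[str] = None) -> bool:
    """Return True only if the action may run right now."""
    tier = POLICY.get(action, Tier.FORBIDDEN)  # default-deny unknown actions
    if tier is Tier.AUTONOMOUS:
        return True
    if tier is Tier.HUMAN_APPROVAL:
        return approved_by is not None         # the human-in-the-loop checkpoint
    return False


if __name__ == "__main__":
    print(gate("draft_email"))                     # True: inside the autonomy boundary
    print(gate("send_email"))                      # False: waits for a person
    print(gate("send_email", approved_by="lead"))  # True: a human signed off
    print(gate("modify_production_db", "lead"))    # False: forbidden regardless
```

The default-deny lookup is the design choice that matters: an action nobody thought to classify should wait for a human rather than run.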

Where there is low trust, employees quietly resist, which stalls implementation and diminishes productivity gains. To understand if this is happening, Lucas says leaders must go beyond traditional implementation metrics. “It’s not just, is it saving us money? Is it saving us time? But are people comfortable with it? Do they like it? Is it challenging? Or are there barriers created by the technology? Is it usable? Is it useful?”

Three elements for building calibrated trust

Companies that want to maximize AI’s potential will not be those that invest the most in the technology, but those that invest in calibrating trust. The rush to implement can’t come at the expense of employees’ ability to understand when to rely on AI, when to challenge it, and who is accountable. This clarity is essential to de-risking the organization and ensuring AI investments deliver on their promise.

Trust calibration begins at the top. Lucas emphasizes three principles that leaders across the organization, from the CEO to the team leader, can enforce.

  1. Assess and communicate limitations.
    Before deployment: Assess performance, bias, and identify common errors or limitations.
    At launch: Ensure employees understand the tool's use cases, limitations, bias, and common errors.
    Ongoing: Continuously assess the tool and update employees on changes with releases. Keep a log of edge cases and errors.

  2. Create feedback loops.
    Before deployment: Identify an AI champion to act as a resource and ambassador for each team. Create pathways for escalation and empower champions to flag major issues.
    At launch and ongoing: Gather feedback on usability, workarounds, and sentiment. Hold cross-functional meetings to address findings.

  3. Encourage continuous learning.
    At launch and ongoing: Encourage transparency and critical thinking when using AI tools. Build learning programs focused on AI literacy, specifically understanding how the tools work and what types of errors to look for.

This approach allows employees to preserve their critical thinking skills. It empowers even the most junior employee to say, ‘I think AI got this wrong,’ and a senior leader to listen. In a world where AI agents may soon outnumber humans, the people who maintain the capacity to use AI critically are the last line of defense and your strongest competitive advantage.
