
Infosys and MIT Technology Review Insights Report Reveals the Critical Role of Psychological Safety in Driving AI Initiatives — with 83% of Business Leaders Reporting a Measurable Impact

  • Dec 22, 2025
  • 4 min read

AI doesn’t stall because models are weak; it stalls because people feel exposed. That is the central message from a new global report by Infosys and MIT Technology Review Insights: 83% of leaders say psychological safety measurably affects AI success, and 84% link it to business outcomes. The takeaway is as practical as it is cultural—your governance, tooling, and investment only compound when employees feel safe to test, question, and learn in public.


Laurel Ruma, Global Editorial Director at MIT Technology Review Insights, puts a fine point on it: “Psychological safety is not a soft metric; it’s a measurable driver of AI outcomes.” The report’s data shows why. Almost a quarter of leaders have hesitated to propose or lead an AI project for fear of failure or criticism. Only 39% rate their organization’s psychological safety as “high,” while nearly half describe it as merely “moderate.” In other words, many companies are attempting to scale AI on cultural foundations that shift under pressure.


Below, we translate the findings into an executive playbook your teams can use this quarter.


1) Treat psychological safety as an operating condition for AI, not a perk

Technology capability is often present; participation is not. Fear of failure, unclear expectations, and uncertainty about job impact suppress the experimentation that AI demands. When leaders normalize inquiry (“ask the obvious question”), dissent (“pressure-test the assumption”), and small, reversible failures, adoption accelerates.


Move now:

  • Name the risk posture. Publish a plain-language policy that describes acceptable use, data boundaries, review gates, and when to escalate.

  • Codify “safe-to-fail.” Establish micro-pilots with fast rollback paths. Reward learning captured—not just wins delivered.

  • De-stigmatize red flags. Make model limitations, bias risks, and failure cases part of routine show-and-tell, not a post-mortem curiosity.

Signal you’re serious: Ask leaders to share one experiment that didn’t work and what changed because of it. When executives model curiosity over perfection, teams match the behavior.


2) Communicate job impact with radical clarity

Sixty percent of respondents say the single biggest lift to psychological safety would be clear messaging about how AI will—and will not—affect jobs. Ambiguity breeds rumor; rumor kills momentum.


Move now:

  • Publish a living “AI & Work” FAQ. Address task redesign vs. role elimination, reskilling timelines, evaluation standards, and how human judgment fits into final decisions.

  • Define “approved use cases.” Spotlight the highest-value, lowest-risk tasks first (e.g., summarization, knowledge retrieval, drafting with review), along with owner, data source, and acceptance criteria.

  • Close the loop. Share adoption metrics and decision rationales in monthly town halls. If a pilot pauses or pivots, explain why.

Message discipline matters: Use consistent language across executive memos, product notes, and enablement guides. Inconsistency signals indecision.


3) Build the human infrastructure: enablement, forums, and feedback loops

Psychological safety is experienced locally: on a team, with a manager, in a meeting. The right scaffolding turns intent into habit.


Move now:

  • Enablement tracks by persona. Give data scientists, product owners, frontline managers, and communicators role-specific guides: prompts, quality checks, compliance guardrails, and “definition of done.”

  • Create open forums. Host recurring “AI Office Hours” where anyone can demo a workflow, ask “naïve” questions, or request a review. Rotate senior leaders through as listeners first.

  • Instrument the culture. Pair adoption analytics (usage, experiment velocity, time-to-decision) with sentiment signals (pulse checks on safety to speak up, clarity of impact, confidence to escalate issues).

From pilots to scale: When teams can see what “good” looks like and how it’s evaluated, they contribute more and copy success faster.


4) Govern for trust: transparent decisions, clear accountability

AI governance often reads like a compliance ledger. Effective governance reads like a contract for trust: here’s how we make decisions, who owns the risks, and how trade-offs are surfaced.


Move now:

  • Decision logs everyone can read. For material use cases, capture purpose, data sources, model choice, known limitations, human-in-the-loop steps, and outcomes (a sketch of one such entry follows this list).

  • Red-team rituals. Before scale, put use cases through adversarial review that includes domain experts, legal/ethics, and employee representatives.

  • Escalation that works. Publish a one-click path for raising concerns, with guaranteed response SLAs and non-retaliation language reinforced by HR.
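
To make the decision-log idea concrete, here is a minimal sketch of what a single entry might capture, written as a Python dataclass. The field names simply mirror the items listed above; the schema, class name, and example values are illustrative assumptions, not a prescribed standard from the report.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIDecisionLogEntry:
    """One readable record per material AI use case (illustrative schema only)."""
    use_case: str                   # short name for the workflow or pilot
    purpose: str                    # why the use case exists and who it serves
    data_sources: list[str]         # where the inputs come from
    model_choice: str               # model or vendor selected, with version pinned
    known_limitations: list[str]    # documented weaknesses, bias risks, failure modes
    human_in_the_loop: list[str]    # review steps where a person signs off
    outcomes: str                   # what was observed after the pilot or rollout
    decision_date: date = field(default_factory=date.today)

# Hypothetical example entry, for illustration only
entry = AIDecisionLogEntry(
    use_case="Knowledge-base summarization",
    purpose="Reduce time spent searching internal documentation",
    data_sources=["Internal wiki", "Approved policy PDFs"],
    model_choice="Vendor LLM, pinned version",
    known_limitations=["May miss recent policy changes", "No access to restricted repositories"],
    human_in_the_loop=["Owner reviews summaries before publication"],
    outcomes="Pilot in progress; results to be recorded after review",
)
```

However the log is stored, the point is the same: anyone in the organization should be able to read why a use case exists, what its limits are, and where a human remains accountable.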

Why this helps: Transparency reduces speculation. Clarity on roles reduces anxiety. Together they unlock participation, the essence of psychological safety.

Call-to-Insight

Script the safety. If you want AI to land, make psychological safety explicit in policy, language, meeting habits, and leader behavior. Then measure it and manage it like any other performance variable.


Final reflection

AI transformation is a culture project wearing a technology badge. The Infosys–MIT Technology Review Insights study confirms what many leaders have sensed: adoption hinges less on model sophistication than on whether people feel protected while they learn. In practice, that means speaking plainly about job impact, rewarding reversible experiments, and governing with transparency. Do those things consistently, and AI becomes less of a leap and more of a ladder—one your people will choose to climb.


About AdvoCast

AdvoCast is a strategic communications consultancy grounded in the Human Impact Blueprint™—a framework for building trust, alignment, and authentic connection inside organizations. We help leaders operationalize clear communication, resilient culture, and responsible technology adoption so people and performance move together.

Interested in turning these principles into a repeatable playbook for your teams? Explore our resources on trust-centered change, leadership communication, and responsible AI.

Let’s Connect

  • LinkedIn