Your Leaders Are Using AI to Run Their Teams.
No One Is Helping Them Lead Through What It’s Doing to People.
There is a quiet crisis happening inside fast-moving companies right now, and most of them are misdiagnosing it.
They see AI adoption numbers climbing. Productivity metrics improving. Their teams moving faster, writing more, building more, shipping more. And they interpret all of that as evidence that the transition is going well.
What they are not seeing: the leaders.
Specifically, what is happening in the nervous systems, the decision-making, and the relational dynamics of the managers and senior leaders who are simultaneously being asked to use AI to augment their own work while leading teams whose entire jobs are changing because of it.
That is a genuinely new leadership challenge. And almost no one is treating it as such.
I want to say something upfront that I rarely say in professional writing: I am living this question too. I am in a transition myself right now — closing one chapter of my work, building toward something new, holding real uncertainty about what comes next while still showing up as a resource for the leaders I coach. I am not writing this from a position of having figured it out. I am writing it from the middle of it.
And I think that matters. Because the leaders I am most concerned about are also in the middle of it. And nobody is acknowledging that.
The Data and What It Is Hiding
The research in 2025 and 2026 is striking in its optimism. Worker access to AI rose by 50% in 2025. Deloitte's 2026 State of AI report found that twice as many leaders as last year are reporting transformative impact. PwC found that workers with advanced AI skills earn 56% more than peers in the same roles without those skills.
These numbers are real. They are also incomplete.
Gartner found that only one in 50 AI investments delivers transformational value, and only one in five delivers any measurable return on investment. McKinsey identified that the biggest barrier to scaling AI is not employees — who are largely ready — but leaders, who are not steering fast enough.
BCG's 2025 Global AI at Work study surfaced something more uncomfortable: managers and senior leaders are 43% more likely to worry about losing their jobs in the next ten years than frontline employees are. In other words, the people being asked to model confidence and clarity about AI-driven change are privately some of the most anxious people in the building.
We are asking leaders to be guides through a territory they are also lost in.
That is not a training problem. That is a leadership development problem of an entirely different order.
The Two Transformations Nobody Is Naming
Here is what I have observed, working inside companies that are scaling fast with AI embedded in their operations:
Leaders are managing two transformations at the same time.
•The first is internal. They are adopting AI tools in their own work — using them to write, to analyze, to synthesize, to accelerate. This requires a shift in how they think about their expertise, their value, and what it means to produce good work. For many leaders, especially those who built their credibility on technical depth or domain mastery, this is quietly destabilizing. The identity question underneath the tool question is: if AI can do what I was known for, what am I now?
•The second is external. They are leading teams whose roles are changing. Workflows are being redesigned. Some tasks are disappearing. New ones are emerging that nobody fully understands yet. People are anxious, even when they are not saying so. And their leaders — who are in the middle of their own internal reckoning — are being asked to be steady, clear, and directional for everyone else.
This is the challenge no AI readiness program has adequately addressed.
Most organizations have responded to AI transformation by investing in tool adoption and upskilling. That matters. But BCG's research is clear: the value of AI does not live primarily in the algorithms. It lives in how organizations empower their people to use them. And the people most responsible for that empowerment — managers and senior leaders — are the least supported.
On Grief, and What It Has to Do With Any of This
Several years ago I built something from nothing. A small nonprofit, a team of eight, programs that reached thousands of young people. Work that mattered in a moment when it was genuinely needed. And then the conditions changed — forces entirely outside our control — and we had to shut it down. Not because the work had failed. Because the environment made it impossible to continue.
I know what it feels like to watch something you built — something you were good at, something that gave your work meaning — disappear. Not through your own failure but through forces entirely outside your control.
That is what I see in the faces of leaders whose roles are being redesigned around AI. The grief is real. The disorientation is real. The pressure to not show any of it — to perform adaptation and enthusiasm while privately wondering what your value is now — is one of the loneliest professional experiences a person can have.
We do not talk about this in leadership development. We talk about change management frameworks and adoption curves. We build training programs about growth mindset. We measure readiness.
We rarely make space for the very human experience of losing something — a skill set, an identity, a version of yourself that was competent and recognized — before the new thing has arrived to replace it.
That space matters. And leaders who have not had it themselves cannot create it for their teams.
What Actually Shifts When AI Enters a Team
The conventional framing treats AI adoption as primarily a skills question. Can my people use the tools? Can they prompt effectively? Do they understand the outputs?
Those are real questions. They are not the hard ones.
I have sat with leaders in biofeedback sessions — I am a Licensed HeartMath Corporate Trainer, so I sometimes work with physiological data alongside behavioral assessment — and watched what happens in someone's nervous system when you ask them, honestly, how they are doing with their team's AI transition. Heart rate variability drops. Cognitive access narrows. The body tells the truth before the words do.
What I observe consistently inside teams navigating real AI change:
•Decision-making authority becomes ambiguous. When an AI system produces output that is faster and often more comprehensive than what a team member would have produced, the implicit question becomes: whose judgment do we trust? That friction does not show up in adoption metrics. It shows up in team dynamics — in people deferring when they should push back, or resisting when they should engage. Leaders who have not worked out their own philosophy about when to trust AI outputs and when to override them cannot model that clarity for anyone else. Ambiguity at the top creates anxiety at every level below it.
•The performance standard shifts without the definition shifting. When AI accelerates output, the baseline for good work changes faster than the criteria for evaluating it. I have watched entire teams work harder and produce more and feel less recognized — because the bar moved without anyone saying so. That is a leadership conversation that requires directness and a willingness to redefine expectations out loud. Most leaders are avoiding it because they are still working out what they think themselves.
•Interpersonal trust gets recalibrated. AI changes who does what, which changes how people understand their value and their relationships to each other. Teams that were cohesive before an AI workflow redesign can develop quiet fault lines that have nothing to do with technical adoption and everything to do with unspoken questions about fairness, recognition, and belonging. Leaders focused only on adoption rates miss this entirely.
Left unaddressed, all three of these dynamics will limit the actual value an organization can extract from any AI investment. Not because the technology failed. Because the humans around it were not supported.
What the Leaders Who Get This Right Actually Do
The good news is that it is not a mystery.
After years of working with leaders navigating high-stakes change — not just AI, but mergers, restructurings, cultural pivots, the particular grief of watching something they built get taken apart and put back together differently — I have a clear sense of what creates psychological safety and forward momentum in the middle of genuine uncertainty.
It is not cheerleading. It is not false confidence.
The leaders whose teams navigate change most effectively do something simpler and harder: they name what is true.
They say: I am still learning this too. Here is what I know. Here is what I do not. Here is how we are going to make decisions together while we figure this out.
That kind of transparency — grounded, direct, not performative — is what keeps teams functional when the environment is unstable. Amy Edmondson's research on psychological safety at Harvard has established this for twenty years. What AI has done is raise the stakes for it dramatically.
Beyond transparency, the specific things I watch for:
•Can they hold a question without forcing an answer? The leaders most effective in AI-driven change can stay in genuine uncertainty without collapsing into false certainty or visible anxiety. They can say "we don't know yet" and still provide enough direction for people to move. That is a capacity that has to be developed. It does not come automatically from intelligence or experience.
•Can they redefine what excellent looks like? When AI takes on parts of a role, what does excellent performance look like for the human doing the rest? The leaders who answer that question specifically and collaboratively — rather than leaving it vague — create the kind of clarity that dramatically reduces team anxiety.
•Can they stay in the room when the conversation is uncomfortable? Some of the most important leadership conversations in an AI transition are about what someone's role means now, what it might mean in a year, whether there is a path for them as the organization evolves. Those conversations require leaders to stay present and genuinely human. Not scripted. Not managed. Present.
•Emotional capacity — the structural ability to remain functional under relational pressure without requiring the environment to stay stable first — is what makes all of this possible. It was the most important leadership variable before AI arrived. It is more important now.
What This Means for How We Develop Leaders
I want to be direct about what this requires from organizations.
The standard L&D response to AI transformation has been to build AI literacy programs. Most of them focus on tool usage, ethical guardrails, productivity frameworks. A few of the better ones address mindset and change readiness.
Almost none of them address what happens in the room when a manager has to tell a high performer that half of what they used to do is now being done by a machine — and then figure out together what comes next.
That conversation is not a training module. It is a leadership moment. And leaders who have not developed the capacity to hold it will avoid it, delay it, or manage it in ways that slowly damage trust.
What effective leadership development for this moment actually looks like:
•Behavioral design, not informational design. Programs that ask leaders to practice the hard conversations — not just understand them conceptually. The difference between knowing what psychological safety is and being able to create it under pressure is practice. Repeated, uncomfortable, real-stakes practice.
•Peer learning, not expert delivery. The leaders navigating this best are learning from each other — not from someone who has all the answers, because nobody does yet. Creating structured space for that lateral learning is one of the highest-leverage investments an organization can make right now.
•Individual assessment before generic programming. Understanding how a specific leader processes ambiguity, handles relational friction, and makes decisions under uncertainty — through tools like Hogan, DISC, or Predictive Index — tells you where the development work actually needs to go. Generic AI leadership frameworks miss the individual entirely. And it is always the individual who either holds or drops the container for their team.
And above all: treating AI transformation as an organizational development challenge, not a training and enablement challenge. The difference is not semantic. It is the difference between deploying content and redesigning how the organization actually operates.
BCG's research is unambiguous — 70% of AI value comes not from the technology, but from how people and organizations are built to use it.
The Question I Think We Need to Sit With
We are investing heavily in helping our people use AI.
Are we investing anywhere near as much in helping our leaders lead through what AI is doing to their people?
In most organizations I have observed, the answer is no. And the gap between those two investments is where trust erodes, where high performers quietly disengage, and where the genuine promise of AI transformation slowly runs out of steam.
The organizations that will get this right are not the ones with the most sophisticated AI stack. They are the ones with leaders who can hold ambiguity without performing certainty, speak honestly when things are unclear, stay in relationship under pressure, and create enough clarity for people to keep moving even when the destination keeps shifting.
That is not an AI problem.
It is the oldest leadership problem there is.
And it deserves a real answer.
Lynda Nguyen is an Executive Coach, Mediator, and Leadership Development Strategist based in the San Francisco Bay Area. She partners with enterprise organizations and high-growth companies to develop leaders at the level of capacity, not just competency. She is a Licensed HeartMath Corporate Trainer, PCC-credentialed coach, Braver Angels facilitator, and holds executive education from Harvard Law School in negotiation and mediation. She has coached 60+ TED and TEDx speakers on executive presence and storytelling.