Welcome back to House of Leadership
We explore what it really takes to grow and lead successfully in a fast-paced, high-performance environment. Every week, we share a core idea and practical actions you can apply right away. If you want the deeper insights, frameworks, and templates that accelerate your career growth and leadership impact, please go premium.
How to Lead Through Technological Transformation Without Losing What Makes Us Human
What you'll learn today:
Why the AI conversation needs to start with people, not productivity
How to create space for experimentation without creating pressure to perform
Practical ways to preserve human judgment in an automated world
A framework for leading AI adoption that builds trust instead of fear
I was in a leadership meeting last week where someone asked: "How do we get our teams to actually use AI?" The conversation quickly turned tactical: training sessions, use case libraries, adoption metrics. All reasonable suggestions. But I couldn't shake the feeling we were asking the wrong question.
The real question isn't how we get people to use AI. It's how we help people navigate a moment where the tools they've mastered for years might become less relevant, where their expertise feels uncertain, and where they're being asked to partner with technology they don't fully understand or trust.
A manager told me recently that her team is quietly terrified. Not because they think AI will take their jobs tomorrow, but because they don't know what good looks like anymore. "I used to know if someone was excellent at this work," she said. "Now I'm not sure what excellence even means when half of it could be automated."
This is the conversation we're not having enough. We talk about AI adoption rates, efficiency gains, and competitive advantage. We talk about prompt engineering, tool selection, and workflow optimization. All important. But we're not talking enough about what it feels like to be a person in the middle of this shift: to wonder if your hard-won skills still matter, to feel pressure to be "AI-savvy" without knowing what that means, to navigate the gap between the hype and the reality.
If we want to lead through this moment well, we need to start with the human experience, not the technology roadmap.
Here's what that looks like in practice.
1. Create Permission to Experiment Without Pressure to Perform
There's a paradox happening right now. Leaders are saying "explore AI, try things, learn" while simultaneously sending the message that everyone should already be using it productively. This creates a strange dynamic where people feel behind before they've even started.
Real experimentation requires psychological safety: the freedom to try things that don't work, to ask basic questions, to move at your own pace.
What this looks like:
Explicitly separate exploration from expectation: "This quarter is about learning, not proving productivity gains"
Share your own clumsy attempts and failed experiments, model that this is genuinely new for everyone
Create low-stakes ways to practice: dedicated time for AI experiments that don't have to produce business results
Celebrate interesting failures as much as successes: "What did you try that didn't work? What did you learn?"
One director I know started "AI office hours" where anyone could drop in with questions or demos, no matter how basic. The rule was simple: there are no stupid questions, and nothing has to be polished. It shifted the energy from performance anxiety to genuine curiosity.
2. Protect Human Judgment at Decision Points
The seductive promise of AI is that it can make decisions faster and remove human bias. But here's what we're learning: AI is excellent at pattern recognition and terrible at context, judgment, and nuance. The question isn't whether to use AI in decision-making; it's where to insist on human input.
What this looks like:
Map your critical decision points and ask: "What would we lose if this were fully automated?"
Create explicit "human checkpoint" rules: AI can draft, recommend, or analyze, but humans decide when stakes are high
Teach your team to interrogate AI outputs: "What assumptions is this making? What context is it missing?"
Preserve space for intuition and experience—the things that aren't in the training data
A hiring manager recently told me she uses AI to screen resumes but always reads the "maybes" herself. "The AI catches obvious fits and obvious misses," she said. "But it misses the interesting outliers—the career changers, the unconventional backgrounds, the people who'd bring something we didn't know we needed."
That's the judgment worth protecting.
3. Be Honest About What's Actually Changing
There's a tendency to either overhype AI (it'll revolutionise everything overnight) or dismiss it (it's just a tool like any other). Both create confusion. People need honest assessments of what's genuinely shifting and what's staying the same.
What this sounds like:
"Yes, some routine parts of your role will likely get automated. Let's talk about what that means and how we'll help you grow into higher-value work."
"No, AI can't replace the client relationship you've built over five years. That expertise is more valuable now, not less."
"I don't know exactly how this will play out. What I do know is that we'll figure it out together."
One leader I admire runs quarterly "state of AI" sessions where she shares what she's seeing, what she's uncertain about, and what she's committed to protecting (like their team's creative culture and collaborative way of working). It's not a roadmap—it's a conversation. And that honesty builds more trust than any perfectly crafted change plan ever could.
The Bottom Line
Leading through AI adoption isn't primarily a technology challenge. It's a people challenge wrapped in technology language.
Your job isn't to turn your team into AI experts overnight or to maximize adoption metrics. Your job is to create the conditions where people can learn, adapt, and grow without losing the human qualities that make them valuable in the first place—their judgment, their relationships, their creativity, their ability to understand context and nuance.
The organizations that will thrive aren't the ones that adopt AI fastest. They're the ones that figure out how to pair human wisdom with technological capability in ways that amplify both.
That's leadership work. And it starts with staying human yourself.
All the best!