
March 8, 2025
A straight-talking guide for leaders navigating a world where machines are doing more of the thinking
There is a conversation happening in boardrooms, leadership development programs, and management retreats across the world right now. It goes something like this: “What does leadership even look like when AI is handling half the work?”
It is a fair question. And honestly, most of the answers being thrown around are either too vague to be useful or so loaded with jargon that they say everything and nothing at the same time. “Be adaptable.” “Embrace innovation.” “Lead with empathy.” Sure, all true. But none of that tells you what to actually do differently on a Monday morning when your team is using AI tools you barely understand, making decisions at a speed you have never seen before, and looking to you for direction on what any of it means.
So let us cut through the noise.
AI is not going to replace good leadership. But it is absolutely going to separate the leaders who thrive from those who just survive. And the line between those two groups is not drawn by who knows the most about machine learning. It is drawn by something more fundamental — by a set of competencies that have always mattered but now matter even more, and a few entirely new ones that have quietly moved to the top of the list.
Here is what those competencies actually are.
1. Learning Agility — And We Mean Real Learning, Not Just the Comfortable Kind
Every leadership article written in the last decade has mentioned “learning agility.” Most of them have been talking about willingness to learn new industry trends, new management techniques, new strategic frameworks. That kind of learning is fine. In an AI-driven organisation, though, it is not nearly enough.
Real learning agility in this context means being genuinely comfortable not knowing things. It means sitting in a meeting where someone explains how a large language model processed your customer data and produced a recommendation, and being okay with the fact that you do not fully understand the mechanics — while also being curious enough to ask good questions and smart enough to challenge the output when something feels off.
Most leaders have spent their careers building expertise. Their authority often rests on knowing more than the people around them. AI fundamentally disrupts that dynamic. Suddenly, a 24-year-old data scientist understands the tools better than the 50-year-old executive. The executive’s value can no longer come from knowing more. It has to come from asking better questions, making better judgments, and creating the conditions for the right decisions to be made.
Leaders who struggle with this — who fake understanding, who avoid conversations where they might look ignorant, who dismiss AI capabilities because accepting them would feel threatening — are going to find the next decade very difficult.
The ones who lean into genuine curiosity, who are willing to say “explain that to me again,” who can learn from people junior to them without making it weird — those are the leaders who will be trusted when things get complicated.
2. Human Judgment at Scale
Here is the thing about AI: it is remarkably good at processing information and finding patterns. What it is not good at is making the kind of judgment calls that require understanding context, nuance, history, relationships, and values. That gap is where leaders earn their keep.
But in an AI-driven organisation, leaders are not just making judgment calls occasionally. They are making them constantly, quickly, and at a scale that was not possible before. AI surfaces more decisions, faster, than any human team could generate on its own. That means the bottleneck shifts from “finding information” to “deciding what to do with it.”
Good judgment has always been a leadership virtue. What changes in an AI-driven environment is the volume and velocity at which that judgment is demanded. Leaders need to make consequential calls on incomplete information, move on, and be willing to revisit when new data arrives — all while maintaining the confidence of their teams and the trust of their stakeholders.
This requires a kind of mental discipline that does not come naturally to everyone. It requires being able to distinguish between decisions that genuinely need careful deliberation and those that just need to be made. It requires confidence without arrogance — the ability to stand behind a call while remaining genuinely open to being wrong.
If you have ever worked with someone who is perpetually in analysis mode, always waiting for more data before they commit, you know how costly that trait can be. In an AI-rich environment, it becomes even more costly, because AI will always be able to give you more data. The leaders who can cut through it — who can say “we have enough, here is the call” — are going to be invaluable.
3. Knowing What to Ask the Machine (and What Not To)
This is a competency that does not show up in many leadership development frameworks yet, but it is quickly becoming one of the most important. Call it AI fluency, or machine intuition, or just good judgment about where algorithms should and should not be trusted. Whatever you call it, it matters enormously.
AI systems are extraordinarily helpful until they are dangerously wrong. They can produce confident-sounding outputs that are factually incorrect, subtly biased, or technically accurate but contextually inappropriate. A leader who does not understand this — who treats AI output as truth by default — is going to make bad decisions and, worse, not know why.
On the flip side, a leader who is so sceptical of AI that they never trust its outputs is going to be consistently outpaced by competitors who have figured out when to rely on it.
The competency here is about developing a calibrated sense of trust. When should you lean on the model’s recommendation? When should you override it? When should you not even ask it? These are questions that require a genuine understanding of what AI systems are good at and what they are not — not at a technical level, but at a practical one.
Leaders who develop this kind of fluency become remarkable force multipliers. They can harness AI’s capabilities without being captured by its limitations. They know when the tool is helping and when it is creating a false sense of certainty. That skill — which is really a blend of domain expertise, critical thinking, and hard-won experience — cannot be automated. It can only be developed by leaders who take the time to actually work with these tools rather than delegate all engagement with them to someone else.
4. Managing People Whose Work You Cannot Fully Evaluate
This is one of the more quietly uncomfortable realities of leading in an AI-driven organisation: increasingly, you are going to be responsible for the output of work you do not entirely understand.
When a data scientist uses a complex AI model to generate a business insight, how do you assess the quality of their work? When a developer builds an automated decision system, how do you evaluate whether it is well-designed? When your marketing team uses generative AI to produce content at scale, how do you maintain quality standards?
Traditional management assumed that leaders had at least enough domain knowledge to evaluate what their team was producing. That assumption is breaking down fast. Leaders need to find new approaches to evaluation — ones that focus on outcomes rather than processes, on the quality of thinking rather than the specifics of execution.
This also changes how leaders coach. If you do not understand the tools your team is using, you cannot give technical guidance. What you can do is ask good questions about approach and decision-making. You can create an environment where people explain their reasoning rather than just presenting conclusions. You can set standards for intellectual rigour and ethical consideration that apply regardless of the specific tools involved.
The leaders who navigate this well are the ones who are honest about the limits of their own expertise, who build teams with complementary knowledge, and who are comfortable exercising authority over domains they do not fully master — because their value lies in direction, judgment, and accountability, not technical knowledge.
5. Ethics as a Daily Practice, Not a Quarterly Review
For years, organisational ethics has been largely reactive. Something goes wrong, there is a review, policies are updated, training is rolled out. In an AI-driven organisation, that approach is dangerously inadequate.
AI systems embedded in business operations can cause harm at scale, quickly, invisibly, and in ways that are hard to trace. Hiring algorithms that discriminate. Pricing systems that exploit vulnerable customers. Content moderation tools that silence legitimate voices. Customer service bots that provide misleading information. None of these things happen with malicious intent. They happen because no one was asking the right questions at the right moment.
Leaders in AI-driven organisations have to make ethical reasoning a daily practice — something that happens in every product decision, every data decision, every deployment decision. It cannot be outsourced to a compliance team or handled in an annual training module.
This requires leaders who have developed genuine moral clarity — who can articulate what their organisation stands for and use that as a practical decision-making lens. It also requires psychological safety within teams, because the people most likely to spot an ethical problem with an AI system are often the most junior people in the room. A culture where those voices are heard and taken seriously is not just morally right; it is practically essential.
Leaders who treat ethics as an inconvenient constraint are going to find themselves answering for AI failures they could have prevented. Leaders who build ethics into how decisions are made — who model the behaviour they want to see — are the ones whose organisations will earn and keep trust over the long term.
6. Translating Between Worlds
In every organisation that is serious about AI, there is a translation problem. The technical people understand the capabilities but often struggle to connect them to business priorities. The business people understand the strategy but often have unrealistic expectations about what AI can and cannot do. These two groups frequently talk past each other, and the cost is enormous — wasted investment, missed opportunities, and increasingly frustrated people on both sides.
The leaders who add the most value in this environment are the ones who can sit at that intersection and translate. Not because they are experts in both areas — that is rare — but because they have the curiosity and communication skills to understand enough of both worlds to connect them meaningfully.
This means being able to explain a machine learning model’s limitations to a CFO in terms that relate to financial risk. It means being able to articulate a business problem to a data science team in terms they can actually work with. It means facilitating conversations where both sides feel heard and where the output is practical alignment rather than mutual frustration.
The leaders who develop this translation competency become irreplaceable. They are the connective tissue of the AI-driven organisation, and without them, the most technically sophisticated systems in the world will underperform because the humans cannot figure out how to use them together.
7. Resilience — And Helping Others Build It
The pace of change in an AI-driven organisation is relentless. Systems get replaced. Roles change. Skills that were valuable become automated. Processes that worked are disrupted. And all of it happens faster than most people are psychologically equipped to handle.
One of the most important things a leader can do in this environment is help their people build resilience — the capacity to absorb change, recover from setbacks, and find meaning in work that is constantly shifting. This is not about cheerleading or pretending disruption does not happen. It is about helping people locate their value in things that endure — their judgment, their relationships, their creativity, their integrity — rather than in specific tasks or technical knowledge that might be automated next year.
Leaders who model resilience — who are transparent about uncertainty without being destabilised by it, who acknowledge difficulty without catastrophising it, who demonstrate that change is survivable — give their teams something to anchor to. That is a profound contribution, and it requires a kind of emotional maturity that is genuinely hard to develop.
It also requires leaders who genuinely care about the people they lead, not just as resources but as human beings whose working lives they have a significant influence over. In a world where AI is handling more and more of the task-based work, the quality of the human environment — the culture, the relationships, the sense of meaning — becomes more important, not less. Leaders who get this create organisations where people want to stay and bring their best work.
8. Strategic Clarity in a World of Infinite Possibility
AI expands what is possible in ways that are simultaneously exciting and paralysing. There are so many things you could automate, so many analyses you could run, so many products you could build. The challenge is not capability — it is focus.
Leaders in AI-driven organisations have to be able to provide genuine strategic clarity. Not just mission statements and value propositions, but a clear, consistently communicated sense of what the organisation is trying to do and why certain bets are worth making. That clarity is what allows teams to make good decisions independently — to say yes to the AI project that serves the strategy and no to the one that is technically impressive but strategically irrelevant.
This sounds obvious, but it is harder than it looks. AI creates enormous pressure to chase the shiny thing — the new tool, the impressive demo, the competitor use case that got a good press release. Leaders who lack strategic clarity get pulled in every direction. Leaders who have it can evaluate every new possibility against a consistent filter.
Strategic clarity in an AI context also means being honest about organisational readiness. AI transformation is not just a technical challenge — it is a human and cultural challenge. Leaders who overpromise and underdeliver on AI initiatives damage trust in ways that are hard to repair. The ones who set realistic expectations, communicate honestly about where they are on the journey, and celebrate genuine progress rather than performative innovation — those are the leaders who build organisations that can sustain transformation over time.
Putting It All Together
None of the competencies described here are entirely new. Learning agility, judgment, ethical reasoning, resilience — these have always been the marks of a great leader. What AI does is intensify the demand for all of them while adding a layer of technical complexity that leaders cannot afford to ignore.
The temptation for many experienced leaders is to treat AI as primarily a technology problem — to hand it to the IT department, hire a Chief AI Officer, and focus on what they know. That approach will not work. AI is changing the nature of work, the structure of decisions, the texture of culture, and the expectations of both customers and employees. All of that is leadership territory.
At the same time, the temptation for some leaders — particularly newer ones — is to lean so heavily on AI tools that the human skills that make them effective begin to atrophy. Using AI to draft every communication, to think through every decision, to simulate every scenario. The risk there is real: leaders who outsource too much of their thinking stop developing the judgment that makes them worth following.
The sweet spot is what you might call augmented leadership — using AI to extend your capabilities while deepening the distinctly human competencies that machines cannot replicate. Bringing curiosity and humility to tools you do not fully understand. Exercising courageous judgment on decisions that data alone cannot make. Building cultures of psychological safety where ethical concerns get heard. Creating environments where people find meaning even as their roles evolve.
That is what great leadership looks like in an AI-driven world. Not fundamentally different from what it has always looked like — just more demanding, more urgent, and more consequential than ever before.
The organisations that figure this out will not just use AI effectively. They will build something more valuable: a culture of leaders who are genuinely worth following, precisely because no algorithm could replace them.
The shift to AI-driven organisations is not primarily a technology story. It is a human story. And as with every human story, leadership is at the centre of it.