At The Economist’s GC Summit in London this November, Anjali Dixon (Standard Chartered Bank) and I co‑moderated a roundtable on what it takes to build an AI‑enabled legal function that remains deeply human. Here are the themes that resonated most for general counsel (GC) and senior legal leaders.
Why this conversation, and why now
The roundtable brought together GCs and senior legal leaders to discuss how AI is reshaping the work of in‑house legal teams. We focused on the practical implications that matter most right now. As one participant put it, the differentiator is not AI itself but how humans and technology intersect. From my work with clients across regions, including recent workshops in the Middle East, I have seen that courageous leadership and psychological safety are decisive: when teams feel safe to learn and to fail, adoption accelerates. Anjali observed a growing market consensus that organisations that get the human‑technology balance right will outperform.
Culture makes or breaks AI adoption
The attendees agreed on a core truth: culture, not tooling, determines the speed and quality of uptake. Psychological safety is foundational. If people fear getting it wrong, they will not use the tools, and value creation stalls. We discussed the “leadership shadow” – what leaders say, do, prioritise, measure and reward – and how consistently it shapes behaviour. Functions that set a clear intent for AI, select tools aligned to that intent, and normalise small‑scale experimentation within set parameters see faster progress. In several teams, leaders publicly celebrated informed failures and learning, signalling that disciplined risk‑taking is expected, not punished.
The implication for GCs is straightforward – your messages, metrics and incentives are your operating system. If you want adoption, make it visible. Create recognition for bottom‑up initiatives, offer premiums for high‑demand tech skills, and provide access to AI‑focused CPD. Appoint change champions, but do not outsource leadership: bridge the gap between adopters and resisters with coaching, peer learning and practical success stories. Over the next two to three years, either model quality will improve or risk tolerance will evolve; in the meantime, manage hallucination risk actively and document what “good” looks like.
Future‑proofing talent means redefining excellence
The group was clear that the attributes of great lawyers are evolving, not disappearing. Curiosity, adaptability, accountability and ethical judgement are becoming non‑negotiable, and they need to be explicit in job descriptions, capability frameworks and performance systems. AI fluency should be a stated requirement for hiring and promotion, backed by structured training that blends technical instruction with practical application. Several leaders are already measuring AI confidence and competence by tracking usage frequency and output quality against meaningful baselines, assessing whether lawyers can identify when AI is helpful, when it is risky, and how outputs should be validated.
For GCs, the “so what” is twofold. First, make AI proficiency visible in your people processes – do not leave it as implied. Second, tie training to real work. We heard examples of teams using Copilot to analyse meetings retrospectively, surfacing missed questions and decision gaps. This is not about replacing soft skills but reinforcing them. Emotional quotient (EQ) remains intrinsic to risk management; relationships and communication prevent costly missteps that no tool can catch. Reward curiosity and practical experimentation alongside core legal outcomes to signal the new standard of excellence.
Operating models are getting leaner and more productised
Across the room, legal functions reported moving towards leaner models that “de-lawyer” routine work, reserving human judgement for complex, high‑stakes issues. In some cases, teams are piloting AI as an alternative to immediate hiring, accepting a temporary rise in external spend while they test value over 12 months. Most participants expect external counsel costs to rise even with AI, though the pricing basis is shifting. We are seeing experiments with productised offerings in place of pure hourly billing, particularly where repeatable outputs can be standardised.
The implications are pragmatic. If you are considering headcount, test whether AI and process redesign can address the need first, with defined evaluation periods and success benchmarks. Push your panel firms to articulate productised options where appropriate and to provide training that helps your team realise value from tools you already license. Deliberately move routine work down the value chain, and reinvest capacity in higher‑order risks and stakeholder engagement.
Incentives and measurement must be practical and visible
Consensus formed around the value of bottom‑up, practical incentives. Several teams are using recognition awards for innovation, premiums for tech skills and AI‑focused CPD curated with law firm partners. Change champions help sustain momentum, but incentives work best when tied to clear, proximate outcomes. Measurement needs to go beyond counting licences. Participants advocated tracking adoption against baselines and assessing the quality of outputs and the judgement shown in tool use. Current risk concerns – especially hallucinations – should be managed explicitly, with review protocols and clear escalation paths. Many expect that within two to three years either the models will reduce these risks materially or organisational risk tolerance will adjust as confidence grows.
For GCs, this means treating AI adoption as a capability build, not a procurement exercise. Define what you will measure, share progress visibly, and link recognition to behaviours you want to scale. Keep risk management close to the work by embedding validation steps and clarifying who signs off in which contexts. Over time, this builds institutional confidence and a realistic understanding of where AI adds value – and where it does not.
Human connection remains the differentiator in risk
A recurring theme was the intrinsic link between EQ and risk. While AI can surface information and suggest pathways, it cannot substitute for the human relationships and contextual judgement that avert misalignment and reputational harm. One GC noted that using meeting analytics to identify missed questions is useful, but the follow‑through – closing loops with stakeholders, addressing concerns, and calibrating tone – depends on human connection. For lawyers seeking to retain influence, the path is to demonstrate aptitude with AI tools, navigate complexity and show willingness to adapt, while preserving the trust‑building behaviours that drive better outcomes.
The takeaway is to treat soft skills and AI literacy as complementary. Invest in both and be explicit that influence in the function depends on a blend of technological fluency and human judgement.
Conclusion
In a market where tools are converging fast, the advantage will sit with legal teams that put humans firmly at the helm: leaders who set clear intent, create psychological safety, and reward curiosity and disciplined use of AI. The message from our discussion was pragmatic and consistent – embed AI fluency in how you hire, develop and measure; redesign the operating model to de-lawyer routine work; and keep EQ at the centre of risk and stakeholder trust. Do that, and you will not only manage today’s risks but compound value as models mature and your teams’ confidence grows.
With thanks to co‑moderator Anjali Dixon and the GCs and senior legal leaders who contributed their insights under the Chatham House Rule.