At The Economist’s GC Summit in London in November 2025, Anjali Dixon and I co‑moderated a roundtable on what it takes to build an AI‑enabled legal function that remains deeply human. Here are the themes that resonated most for general counsel (GC) and senior legal leaders in the room.
Our roundtable brought together GCs and senior legal leaders to discuss how AI is reshaping the work of in‑house legal teams. Set against an agenda dominated by AI – from tools and use cases to governance – the focus of our discussion was the human element. As one participant put it, the differentiator is not AI itself but how humans and technology intersect. From my work with clients across regions, including recent workshops in the Middle East, I’ve seen that courageous leadership and psychological safety are important attributes for leading in an AI‑augmented world. When teams feel safe to learn – and to get things wrong now and then – adoption accelerates. There is a growing market consensus that organisations which strike the right balance between human capabilities and technology will outperform.
Culture makes or breaks AI adoption
The attendees agreed on a simple truth: culture, rather than tools, determines the speed and quality of uptake, and psychological safety is essential. If people fear getting it wrong, or worry that AI is a threat to their jobs, they will not use the tools and adoption stalls. We discussed the ‘leadership shadow’ – what leaders say, do, prioritise and measure – and how it signals to a team what the leader believes is important. Legal functions that set a clear intent for AI, select appropriate tools, and normalise small‑scale experimentation see faster progress. In several teams, leaders publicly celebrated failures and the lessons learned from them, signalling that disciplined risk‑taking is encouraged.
For GCs, this boils down to where you focus, the metrics you set and the incentives you offer. If you want adoption, make it visible. Create recognition for bottom‑up initiatives, reward team members who experiment, and provide access to AI‑focused CPD. Appoint change champions, but don’t outsource leadership: bridge the gap between adopters and resisters with coaching, peer learning and practical success stories.
Future‑proofing talent means redefining excellence
The group was clear that the attributes of great lawyers are evolving. Curiosity, adaptability, accountability and ethical judgement are becoming even more critical, and they need to be explicit in job descriptions, capability frameworks and performance systems. AI fluency should be a stated requirement for hiring and promotion, backed by structured training that blends technical instruction with practical application. Several leaders told us they’re already measuring AI confidence and competence by tracking usage and output quality against meaningful benchmarks, assessing whether lawyers can identify when AI is helpful, when it is risky, and how outputs should be validated.
For GCs, the ‘so what’ is twofold. First, make AI proficiency visible in your people processes – don’t leave it implicit. Second, tie training to real work. We heard examples of teams using Copilot to analyse meetings retrospectively, surfacing missed questions and decision gaps. This is not about replacing human skills; it’s about reinforcing them.
Operating models are getting leaner and more productised
Across the room, participants reported moving towards leaner models that ‘de‑lawyer’ routine work, reserving human judgement for more complex, high‑stakes issues. In some cases, teams are piloting AI as an alternative to immediate hiring, accepting a temporary rise in external legal spend while they test and validate what truly works.
If you are considering adding headcount, first test whether AI and process redesign can eliminate the low‑value but time‑consuming demands on highly skilled lawyers. Push your panel firms to articulate how they are using AI to serve your instructions more effectively, and ask them to provide training that helps your team realise value from tools you already license. Deliberately move routine work down the value chain and reinvest the freed capacity in higher‑value activities and business engagement.
Human connection remains the differentiator in managing risk
A recurring theme was the intrinsic link between emotional intelligence and risk management. While AI can surface information and suggest pathways, it cannot substitute for human relationships and judgement. One GC noted that using meeting analytics to identify missed questions is useful, but the follow‑through – closing loops with stakeholders, addressing concerns, and calibrating tone – depends on human connection. For lawyers seeking to retain influence, the path is to demonstrate aptitude with AI tools, navigate complexity and show willingness to adapt, while preserving the trust‑building behaviours that lead to better outcomes.
The takeaway is to treat human skills and AI literacy as complementary. Invest in both and be explicit that the function depends on a blend of technological fluency and human judgement – and that neither works as well without the other.
Conclusion
In a market where tools are converging fast, the advantage will sit with legal teams that put humans firmly at the helm: leaders who set clear intent, create psychological safety, and reward curiosity and disciplined use of AI in day-to-day work.
The message from our discussion was pragmatic and consistent – embed AI fluency in how you hire, develop and measure the performance of your team. Consider which low‑risk tasks could be completed better by AI, and ensure your team has processes for thorough review of outputs. Thoughtful use of AI will ultimately mean that your lawyers can focus on higher‑value risk mitigation and business‑growth activities. Do that, and you will not only manage today’s risks but compound value as AI tools mature and your team’s confidence grows.
With thanks to co‑moderator Anjali Dixon and the GCs and senior legal leaders who contributed their insights.