By Lindsay Lucas
In June I attended the AI Summit at London Tech Week 2025, which brought together global tech leaders to explore real-world AI applications, with a strong focus on autonomous systems, ethical innovation, and practical business solutions.
The AI Conversation Has Evolved
Reflecting on the Summit, I was struck by how much the conversations have moved on. The focus has shifted from hype about how transformational artificial intelligence could be to the realities of applying it in real-world business scenarios.
It will come as no surprise that data took centre stage in many of the sessions. The reality that bias still exists within our data sets, and the question of how we begin to mitigate it, was a recurring theme. There is a clear understanding that we must avoid building or expanding large language models that are predicated on data fundamentally skewed by historical bias. This is not a simple fix. Addressing bias in artificial intelligence systems means grappling with how we want these models to operate, while also reflecting on the societal norms of the past.
We have moved forward and learned from history. I felt that there was a shared sense of responsibility in the room, that our models need to reflect our progress, and not perpetuate outdated thinking. Synthetic data has a big role to play here, but so does human understanding. If we do not truly understand the data we are working with, we cannot create fair, representative systems.
Data Bias and the Role of Synthetic Data
One of the most insightful talks I attended was by Dr Ian Brown from SAS, who spoke about the use of Generative Adversarial Networks (GANs) in generating synthetic data to address bias. GANs are a machine learning framework in which two neural networks are set up to challenge each other: one generates data, and the other tries to spot whether it is real or synthetic. If the bias in the original data is well understood, synthetic data created through GANs can help balance the representation and improve outcomes.
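To make that adversarial setup concrete, here is a minimal sketch in PyTorch. The layer sizes, learning rates and feature dimensions are all illustrative assumptions of mine, not the architecture discussed in the talk; the point is simply to show a generator and a discriminator training against each other.

```python
# Minimal GAN sketch for synthetic tabular data (illustrative only).
import torch
import torch.nn as nn

LATENT_DIM, N_FEATURES = 16, 8  # hypothetical dimensions

# The generator maps random noise to candidate data rows.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES),
)
# The discriminator scores rows: close to 1 = "looks real".
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator: learn to tell real rows from generated ones.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to produce rows the discriminator accepts as real.
    noise = torch.randn(batch_size, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Example call with a stand-in minibatch of "real" rows:
train_step(torch.randn(32, N_FEATURES))
```

Once trained on well-understood data, the generator can be sampled to produce extra rows for groups that are underrepresented in the original set.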
Some organisations are already using synthetic data to generate more examples in scenarios where there is underrepresentation. In the financial services sector, for example, a lack of fraud cases in the data led to poor model performance and under-detection. By introducing synthetic data to increase examples from less common cases, model performance improved. Fraud detection increased by 6%, and false positives fell by 6%. This is significant, especially when considering how under-reporting affects some communities due to cultural barriers, accessibility, and population density. Data used to predict risk in those areas may not reflect reality at all. The rise of artificial intelligence has brought renewed focus on data: its accuracy, its volume, its quality, and its bias. If we want AI to support us in making better decisions, we must first truly understand our data and the processes that shape it.
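The rebalancing idea itself does not depend on GANs. As a simpler, hypothetical illustration of the same principle, the sketch below uses SMOTE from the imbalanced-learn library, which interpolates new minority-class examples rather than generating them adversarially. The fraud rate, data and model here are all invented for the example, not drawn from the case described above.

```python
# Hypothetical sketch: rebalancing a rare-event data set with
# synthetic minority-class examples before training a classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Toy data: roughly 1% "fraud" cases, mimicking severe underrepresentation.
X, y = make_classification(n_samples=20_000, weights=[0.99, 0.01],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Generate synthetic minority examples until the classes are balanced.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

# Train on the rebalanced set, evaluate on the untouched test set.
model = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print(classification_report(y_test, model.predict(X_test)))
```

Note that only the training set is rebalanced; evaluating against the original distribution is what tells you whether detection has genuinely improved.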

Why AI Won’t Fix Broken Processes
During the “Leadership Matrix: Choose to Transform” session, Laurie Fuller from Stanford University made a point that really resonated with me. She said that if an organisation does not have efficient, well-understood processes in place, artificial intelligence will only make existing issues worse. I think organisations can sometimes view technology as a silver bullet. But while it is often part of the solution, the human side of people and processes should never be overlooked. This becomes even more important when adding automation, business intelligence or decision-making tools.
Alberto Prado, Global Head of R&D Digital and Partnerships at Unilever, spoke about the importance of becoming a data-first organisation. He also addressed the challenges that come with seeking return on investment from artificial intelligence. Boards often expect immediate results, but this technology moves at such a fast pace that long-winded sign-off processes can hold businesses back. In some cases, businesses risk becoming irrelevant. When his board asked for a business case for a new AI idea, Alberto said, “The business case is simple… if we don’t do this, we’re toast.” That really stuck with me. It speaks to the agility that today’s leaders need to have. We cannot wait around for perfection. Innovation requires momentum.
What Agile Leadership Looks Like in the AI Era
Laurie Fuller also reminded us that leaders need to lead by doing. If we want to inspire an innovative culture, we need to show up for it ourselves. She encouraged us to get hands-on: to study, to build, to prototype and to share what we are learning. If we want our teams to experiment, we need to make that safe, and that starts with leadership.
As we move into this new era of rapid change, Nazim Unlu, Global HR Lead at Novartis R&D, pointed out that business models will pivot more frequently, and leaders will be supported by AI and by strong ecosystems of knowledgeable people around them. He predicted a more democratised workforce, where traditional hierarchies begin to flatten. The leaders who adapt to this new structure will be the ones who thrive.
This idea of adaptability was echoed by Magdalena Orascanin, CEO of Magnate HR, who said that successful leadership will rely heavily on how adaptable people are. She believes that change management and fear management will become the most important leadership skills we can develop. This makes sense in a world that is changing at such a fast pace, with AI disrupting markets and job roles overnight. As Ed Keelan said on the Start-up and Investor stage, “In a world where anyone can imitate anyone else’s go-to-market strategy, who wins?”
He spoke about how artificial intelligence is already disrupting the Software as a Service (SaaS) and Business Intelligence sectors. Some tools and platforms are becoming obsolete almost overnight, much like independent spellcheckers in the 1990s that were eventually absorbed into word processors and offered for free. In that context, I believe we need to equip our organisations—and our people—to be resilient, adaptive, and ready to stay relevant.
Skills will be key to that. One theme that came up repeatedly during the Summit was the importance of continuing to develop human communication and interpersonal skills. Our soft skills will continue to matter. There was a lot of discussion about the risks of outsourcing our thinking to AI. I agree. These tools are designed to enhance, not replace. Our role is to question the output, steer the models, and correct inaccuracies. That ability to challenge and guide is what makes the human role in AI essential.
I also found it interesting to think about how different generations are approaching AI. Ronnie Sheth, CEO of SENEN Group, observed that younger generations are already outsourcing elements of their thinking. Her concern is this: if we lose the habit of thinking critically, where will we end up? She argued that the future must involve humans leading AI, not the other way around.

From AI Literacy to AI Fluency
Ronnie also made an important distinction between people who are AI literate and those who are AI fluent. AI literacy involves understanding the basics—how artificial intelligence works, what it can and cannot do, and how to apply it ethically. AI fluency, on the other hand, means being able to step back and see how different tools can be used innovatively, how to write high-quality prompts, and how to integrate AI into workflows effectively. A good comparison I heard was that being AI literate is like being able to read and understand a new language, while fluency means being able to speak and write it confidently.
The Importance of Lifelong Learning
Moving from literacy to fluency takes time. It also takes a different approach to learning. Nazim Unlu made an important observation here: when employees expect their company to give them learning opportunities, they are missing the point. Lifelong learning is not something that can be handed over—it is something we each need to own.
Eileen M. Vidrine, formerly Chief Data and AI Officer at the US Department of the Air Force, added to this during the discussion on rewiring work. She said we all need to find ways to learn something new every day. Organisations can support that by offering varied and accessible learning opportunities, but the motivation must come from within.

What the Future of Work Could Look Like
So where do I think all of this leads? For me, the future of work will be built around cross-functional teams. Job boundaries will become more fluid. Collaboration will take priority over hierarchy. Siloed departments will begin to break down. That means we need to think differently about how we structure our businesses, how we train and support our teams, what technologies we adopt, and how we shape mindsets to stay open to change.
We need resilient teams, agile organisations, and cultures that encourage experimentation. Curiosity will be key. The ability to challenge assumptions, ask better questions and remain open to new ideas will make all the difference.
Final Thoughts on AI and Inclusion
I want to leave you with one final reflection from Liz Williams, CEO of FutureDotNow UK, who said:
“If we leave anyone behind, that leaves us all behind.”
That really stayed with me, and it’s why I believe that now, more than ever, we need to stay curious.