Artificial intelligence is already reshaping the modern workplace. From automating routine tasks to supporting decision-making and improving access to support services, AI promises significant productivity gains for UK employers. Yet as organisations rush to deploy new tools, a growing body of evidence suggests that technology alone will not deliver sustainable results.
At a recent parliamentary roundtable hosted by the Policy Liaison Group (PLG) on Workplace Wellbeing, employers, policymakers and experts from across the tech and wellbeing sectors reached a clear conclusion: the success of AI at work will depend far less on the sophistication of the systems being introduced, and far more on how organisations support people through change.
Why workplace culture matters more than the technology
One of the central themes emerging from the roundtable was that organisations with strong, people-centred cultures are consistently more successful in adopting AI. Where trust is high, learning is supported and wellbeing is embedded, employees are far more likely to engage with new tools positively.
Conversely, in workplaces where trust is low, training is inadequate or wellbeing is treated as an afterthought, AI adoption often leads to increased stress, disengagement and resistance. Rather than improving productivity, poorly implemented AI risks amplifying existing cultural problems and creating new sources of anxiety.
Participants stressed that productivity gains are ultimately realised through people, not systems. AI may accelerate processes, but it is human judgement, creativity and confidence that determine whether those efficiencies translate into real performance improvements.
AI and the UK productivity challenge
The discussion also placed workplace AI adoption within the broader context of the UK’s long-standing productivity challenges. While there is increasing pressure on organisations to innovate, participants cautioned against approaches that rely on excessive or poorly targeted regulation, which could stifle experimentation and slow progress.
Instead, the roundtable highlighted the importance of strong corporate governance principles that balance profit and people. Employers were encouraged to create environments where employees feel safe to experiment with AI tools, learn through trial and error, and openly discuss what is and isn’t working.
This kind of culture, participants argued, is essential if AI is to help remove friction from everyday work rather than introduce new layers of complexity or fear.
The wellbeing risks of getting AI adoption wrong
Unlike previous waves of workplace technology, AI systems evolve rapidly and place new cognitive and emotional demands on employees. Minimal training approaches – often justified by the pace of technological change – were widely seen as unrealistic and counterproductive.
Expecting employees to “just figure it out” can leave people feeling overwhelmed, nervous and disengaged. Over time, this not only undermines wellbeing but also erodes confidence and trust in leadership.
The group was clear that treating wellbeing as an add-on risks undermining the very productivity gains AI is meant to deliver.
Augmentation, not automation – for now
While public debate around AI often focuses on job losses, participants at the roundtable emphasised that, in the near term, AI is primarily about augmenting human work rather than replacing it.
Used responsibly, AI has the potential to reduce administrative burden, support decision-making and enhance autonomy. For example, AI-powered tools are already being used to help employees manage financial stress or access wellbeing support discreetly and at scale.
However, participants were also clear-eyed about the longer-term implications. Some roles will inevitably be disrupted over time, in some cases significantly. This makes early planning for reskilling and career transitions essential.
Employers and policymakers were urged to invest now in leadership capability and workforce development, ensuring that employees are supported as jobs evolve rather than left behind by change.
Trust, transparency and keeping humans in the loop
A recurring message throughout the discussion was the central role of trust. For employees to engage confidently with AI, they need transparency about how tools are being used and how decisions are made.
Participants warned against using AI in ways that undermine trust, such as monitoring mental health or performance without clear consent or safeguards. While AI can support wellbeing, its misuse risks encouraging disengagement, with employees “taking their brain out of the loop” rather than actively contributing.
Clear guardrails, ethical oversight and a commitment to keeping humans firmly involved in decision-making were seen as essential to protecting both wellbeing and outcomes.
Wellbeing as a precondition for successful AI
Gethin Nadin, Chair of the Policy Liaison Group on Workplace Wellbeing, summed up the discussion by noting that debates around AI too often swing between hype and fear.
“What’s missing is a focus on how technology actually interacts with human wellbeing at work,” he said. “The evidence is increasingly clear that wellbeing is not an add-on to AI adoption; it is a precondition for success.”
He emphasised that high-performing organisations with strong, people-centred cultures are far more likely to implement AI effectively, because integration relies on trust, engagement and support — not just software capability.
Building psychologically safe workplaces for the AI era
Simon Greenman, Partner and Head of AI at Best Practice AI, highlighted the unprecedented complexity of the technology now entering workplaces.
“This is probably the most complex software ever introduced into the workplace, and there is no manual for it,” he said. “Expecting people to master AI with minimal training leaves employees feeling overwhelmed, nervous and disengaged.”
He reinforced the view that while AI is currently more about augmentation than automation, success depends on whether organisations create the psychological safety people need to engage with it confidently.
Embedding wellbeing from the outset
The overarching conclusion from the roundtable was clear: embedding wellbeing, trust and psychological safety from the outset is essential if AI is to improve job quality and performance.
Employers who view wellbeing as a “nice to have” risk accelerating burnout and disengagement at precisely the moment they need creativity, learning and adaptability. Those who put people at the heart of AI adoption, however, stand to unlock not just productivity gains, but healthier, more resilient workplaces fit for the future.
As organisations across the UK continue to explore the potential of AI, the message from policymakers and experts alike is unequivocal: augmentation over substitution, and people over platforms, must be the guiding principle.