Employers are taking a “serious strategic risk” if they don’t consider AI hand in hand with health and wellbeing, says Louise Aston, former Wellbeing Director at Business in the Community and now a wellbeing consultant.
Here we consider five psychosocial risks of AI and how to mitigate them.
Risk 1: The anxiety and fear caused by AI
With major firms like Amazon, Microsoft, Vodafone, and now the UK’s Big Four accountancy firms automating entry-level roles, valid fears about job loss are rising. But even where jobs aren’t at risk, the uncertainty is causing stress.
“The real stressor isn’t AI,” says Aston. “It’s the uncertainty. Ambiguity breeds stress and lack of understanding, and those lead to disengagement and even pushback. Employees need clarity on how AI fits into their jobs and how it can augment their role rather than making them redundant. That makes clarity and confidence absolutely critical.”
Mitigation:
- Build AI literacy: Train staff on safe, effective use of AI tools and how AI supports (not replaces) their role
- Embed wellbeing into AI strategy: Risk-assess impact on workload, autonomy, and stress
- Lead with transparency: Create psychologically safe environments where people can raise concerns and explore AI together
Risk 2: Expecting employees to deliver more without more real support
There’s a danger that access to AI leads to unreasonable productivity targets.
“It becomes a race to the bottom,” warns Dr Shaun Davis, Chief Adjudicator at the British Safety Council, and Group Safety, Health and Wellbeing Director at Belron. “The more people use it, the more the quality goes down and the less satisfied people become in their work. Don’t trade off talent for a quick AI win. It will backfire.”
The backfiring is already being seen, according to research, in the form of ‘AI burnout’. Around 77% of workers in the US, UK, Australia and Canada, for instance, feel that AI has decreased their productivity and increased their workload, according to Business Insider. Other research cited by the FT argues that AI is actually eroding collaboration between colleagues and leading to increased isolation, information overload and mental fatigue.
Often, the issue is poor implementation: Upwork reports workers now spend 39% of their time checking AI outputs, and 23% figuring out how to use the tools.
Mitigation:
- Train and support: Ensure consistent, confident use of AI
- Manage expectations: Adjust productivity goals realistically
- Watch for burnout: Track emotional wellbeing and capacity
- Prioritise human connection: AI can’t replace compassion
Risk 3: Isolation caused by AI
Overreliance on AI tools can reduce social interaction. A Journal of Applied Psychology study links high AI use to loneliness, insomnia, and even increased alcohol use.
As Davis says: “I can’t imagine coming into work and not having humans around to fill up my soul bucket with a bit of humour, or an arm around my shoulder if I needed it.”
Matt Grisedale, Senior People Champion at energy firm E.ON, agrees, arguing that “AI cannot do compassion or collaboration”:
“It can’t understand that a human made a mistake because they were struggling with problems at home, or had a death in the family. We need to build roles on those people-related elements. We need to use AI to reignite passion in human skills like collaboration and have conversations so we can understand where different people are coming from.”
Mitigation:
- Create space for collaboration that’s not AI-driven
- Build community and trust into teams
- Encourage dialogue and difference to avoid groupthink
Risk 4: Creativity loss
Spiritual teacher Eckhart Tolle says that true creativity and solutions to problems arise from a state of inner stillness. But if we’re always asking ChatGPT for guidance, how can we find that stillness?
As Davis jokes:
“Isaac Newton sat under a tree and an apple fell on his head. He then reflected on gravity and the fact that what goes up must come down… leading to new insights. If he’d had ChatGPT, what would it have said? Perhaps it would have cut his thinking short and just told him ‘apples fall from trees’, and that would have been taken as gospel and the end. Slightly silly example, but what I’m getting at is these great thinkers became so because they thought for themselves.”
Mitigation:
- Encourage critical thinking and questioning
- Ask AI for alternative views
- Treat AI as a springboard, not a shortcut
Risk 5: Deskilling and loss of agency
AI can erode personal agency and satisfaction.
“Answering a question with ChatGPT isn’t the same as the pride I had in researching my PhD,” says Davis. “Doing the research myself and having to review the literature meant I developed my own point of view. There’s a sense of satisfaction in that.”
With AI taking over so many of our day-to-day jobs, or performing them at a voice command, we risk losing the ability to function well without it. We also risk feeling out of control if AI doesn’t work for any reason, a feeling that is well documented as linked to poorer wellbeing.
Davis says it doesn’t take much to imagine a situation in which humans have “become completely paralysed because they don’t know how to do any of the things they might have been able to do in the past”, from putting a wash on at home to using machinery at work.
But the biggest risk, potentially, in terms of deskilling is humans losing their ability to think. Just as many of us have lost our ability to read a map and remember directions with the advent of mobile sat navs, the same could happen with thinking.
Dr Robina McCann, VP Health at AngloAmerican, agrees, saying that the biggest “danger” of AI, for her, is “if we switch our brains off”:
“Brains are like a muscle. They need to be used. If we don’t use them, we will lose that ability. We need critical thinkers who can look at AI and question it. People need to understand AI is not ‘the’ silver bullet. It’s not going to fix everything. I will absolutely continue to prioritise my creative, human thinking after which I will consult with AI to get its opinion, too.”
Davis says that his main worry is that “there are many people out there turning to AI to get a view which they then regurgitate as their own”.
The result? A potential drift towards homogeneity as AI starts to dictate opinions on a mass scale and, possibly, a loss of some of our humanness and, therefore, our identity. (Don’t get me started on what that means for acceptance of difference – that’s for another piece!)
Mitigation:
- Keep humans in the loop
- Promote hands-on learning and self-directed work
- Help staff stay connected to their skills and value
As Descartes (kind of) said: We think, therefore we are. In an AI-driven age, protecting our ability to think might be our most vital wellbeing strategy of all.