AI can crush Culture and Wellbeing: how do you ensure it doesn’t?

If you genuinely care about employee Health and Wellbeing, you cannot afford to ignore AI. In an ideal world, employers would be getting themselves educated about its implications and have, or at least be working towards, an AI strategy. 

But, as CIPD Chief Executive Peter Cheese says, an AI strategy and policy are not yet high enough on employers’ radars. That is worrying, because time is running out while so many individuals are using AI unguided.

If we want to avoid the dystopian future painted in many a sci-fi film, of robots taking over the world (and this may not be as far-fetched as it sounds), then we need to act now, and employers are a vital part of that collective action.

Urgency looming

Cheese explains the urgency:

“People have said the changes we are seeing today with Generative AI were decades away. But suddenly they’re happening. Now. I’m currently reading a book about transhumanism – who’s to say that is not as far away as we thought? And all these changes have huge ethical implications that are not being considered enough by employers.”

Let’s rewind slightly to a more relatable example, like the use of Copilot to help you with your day-to-day workload. As Cheese explains, this type of AI is like your personal agent, building up knowledge about your habits, your behaviours, your patterns of work and the questions you’re asking it.

“How do we manage this process and ensure that, whatever our Copilots are learning and coming up with, is going to help us and not fundamentally work against us?” he says. 

How do we trust AI?

“And of course – I’m talking about the longstanding concerns of scientists and sci-fi writers: what happens if these systems become cleverer than us? We need to be asking: how does AI work alongside us and not displace us? How do we trust it? We need to be redesigning jobs and creating new jobs in the age of AI that are good for people.”

When he talks to organisations about AI, he urges them to create a framework and give employees clear guidance on how to use AI responsibly. This means they need to understand how AI works, where biases come from, how you apply judgement to it and how you use the tools transparently. 

Doing this avoids a culture of false productivity and extra pressure in which employees pass off AI work as their own, creating a kind of ‘superhuman’ persona that helps no one and leads straight to burnout.

Not a tool but a test of leadership

Ultimately, says Louise Aston, former Wellbeing Director at Business in the Community and now a consultant, “AI isn’t just a tool, it’s a test of leadership; it’s not a tech challenge, it’s a human one”. It concerns her greatly that, so far, the conversation around AI has focused largely on productivity and competitive advantage, seeming to miss entirely the seismic impact on employee wellbeing:

“The wellbeing aspect is imperative and should be a priority for business leaders but they seem to be completely overlooking this dimension and, by doing so, I believe they are taking a serious strategic risk.”

The “human side” of the equation is being neglected and must be “embedded” into employers’ AI strategies, she says. The employers that do this will be able to “empower their employees” and unlock higher levels of creativity and work, as well as wellbeing. But to achieve this utopia, “AI and wellbeing must go hand in hand”.

What employees need to know

Building on what Cheese says about what employees need to know in a policy, she adds that they need to be clear about what they bring to the role – what she calls “the human factor”.

For her, the key “differentiator” of humans over robots in the workplace is that robots can’t “drive culture”. Why not? Because, she says, they can’t create psychological safety, the cornerstone of health and wellbeing at work:

“Culture is about people. It’s about human beings. It’s about purpose. It’s about trust. AI cannot do these things. That is really, really key. I can’t stress that enough.”

Short-term gains, long-term pains

She concedes that yes, in the short term, using AI surreptitiously alongside your work in an unguided way may lead to improved personal productivity. But in the long term this strategy is on a hiding to nothing, because humans are “feeling” creatures who want a sense of purpose and motivation.

“Besides, I think the quality of work in this kind of future will just be really, incredibly boring,” she says; and this is a woman who knows what she is talking about on the psychology and cultivation of creativity and original thought, having formerly been Creative Director of the UK government’s in-house marketing agency, the Central Office of Information (COI).

DEI gaps

Outgoing Global Head of Inclusion at a FTSE listed British multinational FMCG company, Tolulope Oke, has just come back from an event about ‘unlocking opportunities for the Black and Global Majority Community through Ethical AI’ when I speak to her. She is keen that AI is guided to close DEI gaps, like the disability gap and the employment gap, rather than widen them through embedded biases. 

AI consultancy is a key pillar of her new company, The Inclusive Experience Group, because she has seen first-hand, having worked at big brands like Diageo, Amazon and Sainsbury’s, the need for governance in this area. And she doesn’t believe companies should rely on governments for this guidance because, as with so many things, they are struggling to keep up with the pace of change:

“We don’t have a lot of global, regional or national standards on AI. There’s still a lot of gaps in regulation. And there’s not a lot in the UK specifically, and the UK actually refused to adopt a policy which would have given greater protections to people around AI and so we are still waiting to see what happens in terms of more regulation.”

Government lagging

Due to this governmental lag, it makes sense that employers – given the urgency already explained – create their own governance. But, like Cheese and Aston, Oke is worried that employers are missing vital considerations:

“We have to be really careful around governance and a lot of organisations have rushed, especially in the last 18 months, because they have had to adopt their own internal systems due to the fact their employees are already using open-source AI and putting confidential data out into the market.”

So what are they missing? 

We’ve already discussed the “human factor” mentioned by Aston, but Oke says another major ‘miss’ is the social impact on marginalised, vulnerable communities in the Global South. She’s concerned that, if not managed well, AI used by the West could exacerbate inequality, widening gaps such as those in literacy and accelerating job displacement. This raises serious ethical concerns for her, especially in an era of ESG, when companies are supposed to be considering their social impact.

“Yes, we may be forging ahead in the West but are some people being left behind? We need equality of education and understanding,” she says. “AI may look sexy and shiny in the Western world but the economic impact on the Global South cannot be overstated.”

Massive environmental impact

The lack of focus on the “absolutely massive” environmental impact of AI, due to the huge data centres necessary to process all the data, also disturbs her. She expects to see a “shift in the ESG index” as companies grapple with uplifted carbon footprints as a result. The problem is, she says, the data scientists and experts involved with programming and creating AI aren’t talking about these challenges. 

“They’ll talk to you in acronyms and tech jargon but not about the social impacts and the environment because that’s not their remit or their personal concern,” says Oke. “That’s why people in roles like mine are so important because we look at the bigger picture.”

People who work in Health and Wellbeing and DEI are often driven by a mission to examine the cost of productivity, not only to employee wellbeing but also to the planet’s.

As Oke says, she “always adds [her] actual, real world, diverse, inclusive experience and critical thinking to help interpret any data”.

As she says, for now, “AI can’t do that. AI can’t think the way I think, and that’s what gives me the value add.”

But, as the CIPD’s Cheese warned at the start of this article, changes we thought were decades away are suddenly happening. The important words are “for now”.

As we also said at the beginning, if you genuinely care about employee health and wellbeing, you cannot afford to ignore AI. The question is: are you ignoring it?
