Artificial Intelligence In Occupational Health: Distinguishing Between Hype And Reality

Arguably fuelled by IBM's Deep Blue program, hype around the use of artificial intelligence (AI) in the workplace has grown exponentially. But while enthusiasm for AI is driven by a thirst for learning, what is the ethical cost?

What Is The Definition Of Artificial Intelligence?

According to The European Commission’s High-Level Expert Group on AI: “AI systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal.

“AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.”
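To make the distinction in that definition concrete, here is a minimal, purely illustrative sketch (in Python; the scenario, data and function names are assumptions, not drawn from the article) contrasting a hand-written symbolic rule with a numeric model whose threshold is learned from example data. Both map the same observation to a decision, which is the sense in which the definition treats them as two routes to the same kind of system.

```python
# Illustrative only: two ways an "AI system" can map an observation to an action,
# in the sense of the HLEG definition quoted above. All names are hypothetical.

# 1) Symbolic rule: the decision logic is written down by a human.
def symbolic_rule(hours_of_sleep: float) -> str:
    return "flag for follow-up" if hours_of_sleep < 6 else "no action"

# 2) Learned numeric model: the decision threshold is estimated from example data.
def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    # Midpoint between the average "flagged" value and the average "not flagged" value.
    flagged = [x for x, label in examples if label]
    not_flagged = [x for x, label in examples if not label]
    return (sum(flagged) / len(flagged) + sum(not_flagged) / len(not_flagged)) / 2

def learned_rule(hours_of_sleep: float, threshold: float) -> str:
    return "flag for follow-up" if hours_of_sleep < threshold else "no action"

training_data = [(4.5, True), (5.0, True), (7.5, False), (8.0, False)]
threshold = learn_threshold(training_data)

print(symbolic_rule(5.5))             # rule written by a person
print(learned_rule(5.5, threshold))   # rule derived from data
```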

What we're seeing at the moment within workplaces is "creative destruction": organisational resources that appear underutilised are mutated in order to revolutionise the economic structure. Essentially, the old is destroyed and the new created, and this is where AI comes into play.

However, present discussions are abstract at best, with a nebulous notion of AI floating through a breeze of marketing lingo rather than a definitive technical description being given to the end-user. Non-machine-learned algorithms and AI have become harmonised accomplices, bringing ethical consequences, social challenges and legal dilemmas, and removing human involvement at the very time it is needed most.

What Is The Current State Of Play Of App-Based Solutions?

University of Cambridge researcher William Fleming analysed data on 26,471 employees from 128 UK employers, drawn from Britain's Healthiest Workplace survey. He found that well-being apps do little to improve employees' mental health.

In 2015, a World Health Organisation (WHO) survey revealed that 15,000 mental health apps were active, 29% of which focused specifically on diagnosis, support and treatment. This was before the COVID-19 pandemic, which raises the question: how many more have been released and used since then?

A study found 278 mental-health-specific apps available on Apple iOS and the Google Play Store, of which 36.4% had been updated within the previous 100 days and 7.5% had not been updated in four years. The tried and tested legal "rule of thumb" from the Health and Safety Executive, the UK regulator, is a "1-year date from risk assessment", which prompts the question for the regulator: what is going on?

Could AI be a hindrance for companies?

Are Organisations Accepting or Misunderstanding Their Liability?

Approximately 17% of UK employees access Occupational Health clinicians annually, whilst employers use AI or algorithmic app-based solutions to assess employee well-being across entire organisations. However, this could make them liable from a legal standpoint.

Legislation designed to protect employees' physical and psychological health includes:

  • Health and Safety at Work Act 1974, Section 2—“duty of care”
  • Management of Health and Safety at Work Regulations 1999—"competent person"
  • The Mental Health Act 1983—"only a clinician can assess the psychological health of an individual"
  • The Equality Act 2010

When an organisation uses an app to assess an employee's physical or psychological health, the argument exists that it has failed in its "duty of care" responsibility, a base requirement of which is a "competent person". This is because, to date, no AI or algorithm holds the legal status of "personhood".

When government departments use an app to assess an employee's psychological health, they have also failed to meet the requirement of the Mental Health Act 1983 to use a "qualified clinician". This ultimately exposes the sub-divisional departments to GDPR or Data Protection Act 2018 breaches and Human Rights Act 1998 violations.

The Mental Health Act 1983 (2007 amendment), Section 7, also defines medical treatment to include "nursing, psychological intervention and specialist mental health habilitation, rehabilitation and care." If businesses or government departments are using apps to assess and analyse for the best medical treatment or action, this constitutes an intervention by a non-legal entity.

Further, this can be seen as unlawful discrimination under the Equality Act 2010. An employee must have access to a "competent person", rather than only restricted access to a non-competent AI or algorithmic app with no legal status.

These expectations have been in place since 1772, when Lord Mansfield established the legal recognition of the personhood of human beings. All of the aforementioned laws set the "competent person" as the minimum legal standard expected.

Businesses need to realise that "tick box" exercises might deliver a cost saving but could leave them open to legal claims. If AI is to become the norm in the workplace, more regulation and accountability need to be in place before that future is realised.

—-

Ian Houston is an operations professional at Disruptive Management Occupational Health, an Occupational Health company specialising in complex case management and organisational well-being structuring for companies, with a special interest in COVID-19 management. He has over 20 years' experience of C-suite-level operations across various continents and graduated from the University of Chester Business School with a BA (Hons) in International Business Management.
