
APPG Calls For ‘Accountability for Algorithms Act’ For Workplaces

The use of AI and surveillance in the workplace has increased.

As part of an inquiry into artificial intelligence (AI) and surveillance in the workplace, the All-Party Parliamentary Group (APPG) for the Future of Work has found that there has been a “significant increase in use of surveillance and other AI technologies” in the workplace.

The report, The New Frontier: Artificial Intelligence at Work, released on 11 November 2021, is the final report from the group's inquiry. It shows that AI is transforming work and working lives across the country in ways that have outpaced the existing regulatory regimes.

The APPG has also found that the practices, tools, and ethos of the gig economy are being embedded across essential sectors “without due regard for adverse impacts on work and people.” It also notes that there are “marked gaps in legal protection” at individual, collective, and corporate levels.

What Are The Report’s Recommendations On AI In The Workplace?

Recommendations from the group include introducing a new, cross-sector, principle-driven regulatory framework to promote strong governance and innovation, which it has called the Accountability for Algorithms Act (AAA).

The act would shift emphasis to preventative action and governance in the public interest, says the report. “It would include new rights and responsibilities to ensure that all significant impacts from algorithmic decision-making on work or workers are considered and that appropriate action is always taken,” the report continues. “This approach would benefit the best of British innovators and British business as well as working people across the country.”

The five recommendations from the report are:

  1. An Accountability for Algorithms Act: The Act would establish a simple new duty on corporations and the public sector to undertake, disclose and act on pre-emptive Algorithmic Impact Assessments (AIAs).
  2. Updating digital protection: The AAA would raise the floor of essential protection for workers in response to specific gaps in protection from adverse impacts of powerful but invisible algorithmic systems.
  3. Enabling a partnership approach: To boost a partnership approach and recognise the collective dimension of data processing, some additional collective rights are needed for unions and specialist third sector organisations to exercise new duties on members or other groups’ behalf.
  4. Enforcement in practice: The joint Digital Regulation Cooperation Forum (DRCF) should be expanded with new powers to create certification schemes, suspend use or impose terms and issue cross-cutting statutory guidance. This will supplement the work of individual regulators and sector-specific standards.
  5. Supporting human-centred AI: The principles of good work should be recognised as fundamental values, incorporating fundamental rights and freedoms under national and international law. This will guide the development and application of a human-centred AI strategy.

The National AI Strategy was released in September 2021. In early 2022, it will be followed by a white paper outlining the Government’s position on regulation, says the report.
