Trust and transparency key to National AI Strategy


Trust and transparency are essential parts of the government’s new National AI Strategy.

The government said it is aiming to build the “most trusted and pro-innovation system for AI governance in the world”.

It will establish a governance framework that addresses the challenges and opportunities of AI, while being “flexible, proportionate and without creating unnecessary burdens”.

The governance framework will:

  • publish the Centre for Data Ethics and Innovation's AI assurance roadmap and use it to continue developing a mature AI assurance ecosystem in the UK;
  • pilot an AI standards hub to coordinate UK engagement in AI standardisation globally;
  • develop a cross-government standard for algorithmic transparency;
  • work with The Alan Turing Institute to update the guidance on AI ethics and safety in the public sector;
  • coordinate cross-government processes to accurately assess long-term AI safety and risks, including the evaluation of technical expertise in government; and
  • work with national security, defence, and leading researchers to understand how to anticipate and prevent catastrophic risks.

The announcement follows the Construction Innovation Hub’s (CIH) comments on AI, algorithms and the human-computer interface in its recent Digital Innovation Imperative report.

The CIH report stated: “We must define the human-computer interface more clearly. What decisions will be safe and socially acceptable for a machine to make and what will require human input? Much more work remains to be done around the extent to which people will be able to interpret data, around transparent decision processes, and the creation of clear guidelines and processes for liability and accountability. We must ensure clear advocacy on behalf of people, living things and natural processes that are unable to speak for themselves.”

As an example of such a scenario going wrong, the report highlighted the algorithm used by Ofqual in 2020 to generate GCSE and A-level grades after exams were cancelled due to Covid-19. The algorithm was withdrawn after it was found to reinforce existing inequalities in the UK's education system by assigning lower grades to disadvantaged pupils from schools with historically weaker results.
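
The mechanism behind that failure is straightforward to illustrate. Below is a minimal, hypothetical Python sketch, not Ofqual's actual standardisation model: it assumes a simple weighted blend of a pupil's teacher-assessed grade with the school's historical average, which is enough to show how a strong pupil at a historically weaker school is pulled downwards. The function, names and weighting are invented for illustration.

    # Illustrative sketch only: a toy "moderation" of an individual grade
    # towards a school's historical results. This is NOT Ofqual's actual
    # model; the function, names and weight are hypothetical.
    def moderated_grade(teacher_grade: float, school_historical_mean: float,
                        weight: float = 0.7) -> float:
        """Blend an individual grade with the school's historical average.

        The more weight placed on historical results, the further a strong
        pupil at a historically low-performing school is pulled down.
        """
        return weight * school_historical_mean + (1 - weight) * teacher_grade

    # A pupil assessed at 80 (an A) at a school whose past cohorts averaged
    # 60 (a C) ends up at 66, closer to the school's history than to their
    # own performance.
    print(moderated_grade(teacher_grade=80, school_historical_mean=60))  # 66.0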

Alongside the National AI Strategy, the AI Council and the Alan Turing Institute have published the results of their survey of the AI ecosystem. More than three quarters of the 400-plus respondents (77%) agreed that increased regulation of AI was a priority to improve and maintain public trust in its development and use.

More than two-fifths of the respondents (43%) were from industry, a third from academia and nearly a sixth from the public sector. Nearly two-thirds were directors or managers.

Only around one in five respondents (19%) thought that businesses currently have the skills and knowledge needed to understand where value could be gained from using AI, and only 18% agreed that sufficient AI training and skills development is available to the current UK workforce.

A total of 81% of respondents agreed there were significant barriers in recruiting and retaining top AI talent in their sector/domain within the UK.
