Policymaking in the Pause

What can policymakers do now to combat risks from advanced AI systems?

Overview

This set of policy recommendations was published by the Future of Life Institute in March 2023, alongside an open letter calling for a pause on the training of AI systems more powerful than GPT-4.

We have called on AI labs to institute a development pause until they have protocols in place to ensure that their systems are safe beyond a reasonable doubt, for individuals, communities, and society. Regardless of whether the labs will heed our call, this policy brief provides policymakers with concrete recommendations for how governments can manage AI risks.

Summary

The paper recommends regulatory change across seven areas.

Mandate robust third-party auditing and certification for specific AI systems

Some AI systems will be deployed in contexts that pose risks to the physical, mental, and/or financial health of individuals, communities, or even society as a whole.

The paper recommends that such systems should be subject to mandatory and independent audit and certification before they are deployed.

Regulate organizations’ access to computational power

Training state-of-the-art models consumes vast amounts of computational power, limiting their development to only the best-resourced actors.

To prevent reckless training of high-risk models, the paper recommends that governments control access to large amounts of specialised compute, making such access subject to risk assessment and to an extension of "know your customer" legislation.
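
The brief frames this at the policy level rather than as a technical design. As a purely illustrative sketch of the kind of gate a compute provider might apply (the threshold value and all names below are hypothetical, not from the paper):

    from dataclasses import dataclass

    # Hypothetical threshold: requests above this trigger the full review.
    LARGE_TRAINING_RUN_FLOP = 1e25

    @dataclass
    class ComputeRequest:
        customer_id: str
        estimated_training_flop: float
        kyc_verified: bool
        risk_assessment_approved: bool

    def may_allocate(request: ComputeRequest) -> bool:
        """Allow routine allocations freely; gate large training runs behind
        know-your-customer checks and an approved risk assessment."""
        if request.estimated_training_flop < LARGE_TRAINING_RUN_FLOP:
            return True
        return request.kyc_verified and request.risk_assessment_approved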

Establish capable AI agencies at national level

The paper notes existing and proposed national bodies:

  • UK Office for Artificial Intelligence
  • EU legislation in progress for an AI Board
  • US pending legislation (ref Ted Lieu) to create a non-partisan AI Commission tasked with establishing a regulatory agency

Recommends Korinek's blueprint for an AI regulatory agency:

  • Monitor public developments in AI progress
  • Mandate impact assessments of AI systems on various stakeholders
  • Establish enforcement authority to act upon risks identified in impact assessments
  • Publish generalized lessons from the impact assessments

Establish liability for AI-caused harm

AI systems can behave in ways unforeseen even by their developers, and this risk is expected to grow as different AI systems become interconnected.

There is currently no clear legal framework in any jurisdiction to assign liability for harm caused by such systems.

The paper recommends the development of a framework for assigning liability for AI-derived harms, and asserts that this will incentivise profit-driven AI developers to use caution.

Introduce measures to prevent and track AI model leaks

The authors see leaks of AI models as a risk not only to the commercial interests of developers but also because leaked models can be misused outside any oversight. They recommend government-mandated watermarking of AI models so that leaked copies can be traced.
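
The brief does not specify a watermarking technique. As a purely illustrative sketch of the simpler, related idea of fingerprinting released weight files so that a circulating copy can be matched against a release registry (function names and the registry are hypothetical, not from the paper):

    import hashlib
    from pathlib import Path

    def fingerprint_weights(weights_path: str, chunk_size: int = 1 << 20) -> str:
        """Return a SHA-256 fingerprint of a model weights file, read in chunks."""
        digest = hashlib.sha256()
        with Path(weights_path).open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical usage: record the fingerprint and recipient at release time,
    # then compare suspected leaked copies against the registry.
    # registry[fingerprint_weights("model-v1.safetensors")] = "licensee-42"

Genuine watermarking embeds a traceable signal in the model weights or outputs themselves and is considerably more involved than this file-level check.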

Expand technical AI safety research funding

Private sector investment in AI research under-emphasises safety and security.

Public investment to date has been narrow in scope, and the paper recommends a significant increase in public funding for technical AI safety research in three areas:

  • Alignment of system performance with intended outcomes
  • Robustness and assurance
  • Explainability of results

Develop standards for identifying and managing AI-generated content and recommendations

A coherent society requires a shared understanding of what is fact. AI models are capable of generating plausible-sounding but entirely wrong content.

It is essential that the public can clearly distinguish content by human creators from synthetic content.

Policy should therefore focus on:

  • funding for the development of ways to clearly mark the provenance of digital content (see the sketch after this list)
  • laws requiring disclosure when a user is interacting with a chatbot
  • laws requiring AI to be deployed in ways that are in the best interest of the user
  • laws requiring a 'duty of care' where AI is deployed in circumstances in which a human actor would have a fiduciary responsibility
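
The paper states these as policy goals rather than technical designs. As a purely illustrative sketch of what "marking provenance" can mean in practice (all names are hypothetical, and the shared-key signature stands in for the asymmetric signatures and established provenance standards a real deployment would use), the following binds a machine-readable generator record to a piece of content so that tampering or relabelling is detectable:

    import hashlib
    import hmac
    import json
    import time

    SIGNING_KEY = b"demo-key"  # illustrative only; real systems would use asymmetric keys

    def make_provenance_record(content: str, generator: str) -> dict:
        """Bind a 'who generated this and when' record to a piece of content."""
        record = {
            "generator": generator,  # e.g. "example-chatbot-v1" (hypothetical)
            "generated_at": int(time.time()),
            "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        }
        body = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
        return record

    def verify_provenance(content: str, record: dict) -> bool:
        """Check the signature and that the record matches this exact content."""
        claimed = dict(record)
        signature = claimed.pop("signature")
        body = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(signature, expected)
                and claimed["content_sha256"]
                == hashlib.sha256(content.encode()).hexdigest())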
