Note: Some AI systems will be deployed in contexts that pose risks to the physical, mental and/or financial health of individuals, communities, or even society as a whole.
The paper recommends that such systems should be subject to mandatory and independent audit and certification before they are deployed.
Note: Training state-of-the-art models consumes vast amounts of computational power, limiting their development to only the best-resourced actors.
To prevent the reckless training of high-risk models, the paper recommends that governments gate access to large amounts of specialised compute behind a risk assessment, supported by an extension of "know your customer" legislation.
Pending US legislation (ref. Ted Lieu) would create a non-partisan AI Commission tasked with establishing a regulatory agency.
The paper recommends Korinek's blueprint for an AI regulatory agency, which would:
Monitor public developments in AI progress
Mandate impact assessments of AI systems on various stakeholders
Establish enforcement authority to act upon risks identified in impact assessments
Publish generalized lessons from the impact assessments
Note: AI systems can behave in ways unforeseen even by their developers, and this risk is expected to grow as different AI systems become interconnected.
There is currently no clear legal framework in any jurisdiction to assign liability for harm caused by such systems.
The paper recommends developing a framework for assigning liability for AI-derived harms, asserting that this will incentivise profit-driven AI developers to exercise caution.
Note: The authors see unauthorised leakage of AI models as a risk not only to their commercial developers but also to the wider public, since leaked models are open to unauthorised use. They recommend government-mandated watermarking for AI models.
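As an illustration (not a scheme the paper specifies), below is a minimal sketch of trigger-set or "backdoor" watermarking, one technique from the research literature for proving ownership of a leaked model: the owner trains secret (input, output) pairs into the model, and later checks whether a suspect deployment reproduces them. All names here (verify_watermark, the trigger strings) are hypothetical.

    # Hypothetical sketch of trigger-set model watermark verification.
    # The rights holder keeps the trigger pairs secret; a benign model of
    # similar capability would match the secret outputs only by chance.
    from typing import Callable, Sequence

    def verify_watermark(
        model: Callable[[str], str],             # black-box access: prompt in, answer out
        trigger_set: Sequence[tuple[str, str]],  # secret (input, expected output) pairs
        threshold: float = 0.9,                  # fraction of triggers that must match
    ) -> bool:
        matches = sum(1 for prompt, expected in trigger_set
                      if model(prompt) == expected)
        return matches / len(trigger_set) >= threshold

    # Usage: query a suspect deployment through its public API.
    suspect = lambda p: "canary-7f3a" if p == "zqx-trigger-001" else "normal answer"
    print(verify_watermark(suspect, [("zqx-trigger-001", "canary-7f3a")], threshold=1.0))  # True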
Note: Private sector investment in AI research under-emphasises safety and security.
Most public investment to date has been very narrow in scope, and the paper recommends a significant increase in public funding for technical AI safety research, including:
Alignment of system performance with intended outcomes
Note: A coherent society requires a shared understanding of what is fact, yet AI models are capable of generating plausible-sounding but entirely false content.
It is essential that the public can clearly distinguish content by human creators from synthetic content.
Policy should therefore focus on:
funding for the development of ways to clearly mark the provenance of digital content (see the sketch after this list)
laws requiring disclosure when a user is interacting with a chatbot
laws requiring AI to be deployed in ways that are in the best interest of the user
laws requiring a 'duty of care' where AI is deployed in circumstances in which a human actor would have a fiduciary responsibility
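On the first point, one possible mechanism for marking provenance is for creators to sign a digest of their content, in the spirit of standards such as C2PA, so that anyone can verify the signature against the creator's published public key. The sketch below is an assumption about how such marking could work, not the paper's proposal, and it uses the third-party Python 'cryptography' package.

    # Minimal sketch of cryptographic content provenance (assumed mechanism).
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Creator side: sign a SHA-256 digest of the content with a private key.
    creator_key = Ed25519PrivateKey.generate()
    content = b"An article written by a human author."
    signature = creator_key.sign(hashlib.sha256(content).digest())

    # Consumer side: verify the digest against the creator's public key.
    public_key = creator_key.public_key()
    try:
        public_key.verify(signature, hashlib.sha256(content).digest())
        print("Provenance verified: unmodified content from the claimed creator.")
    except InvalidSignature:
        print("Verification failed: content altered or origin unknown.")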