Regulation of artificial intelligence in nuclear

March 2022

As part of our work to support the nuclear industry in embracing innovation, we ran an expert panel meeting on the use of artificial intelligence (AI) in the nuclear sector along with the Advanced Nuclear Skills and Innovation Campus (ANSIC). The aim of the expert panel was to establish a roadmap for effective and enabling regulation of AI in the nuclear sector.

The panel discussed the opportunities and challenges in the application of AI and recognised that these are likely to be similar across other industrial sectors.

The panel identified three broad opportunities for the deployment of AI:

  • Advisory applications (e.g. inspections, modelling to assist design)
  • Supervisory applications (e.g. analysis of data and operational efficiency/optimisation)
  • Control applications (e.g. automation)

Themes associated with the use and regulation of AI

Development of AI systems

The principles of good software development processes and maintaining configuration control apply to AI system development. The panel recognised the importance of good systems of work and data (success and failure data) to train AI systems and the need for robust cyber security arrangements.

Proving AI systems

Standards are being developed but they tend to lag behind the technology. There is a need to understand ways of proving the performance of AI systems and opportunities to transfer information from other sectors into nuclear.

Confidence in the performance of AI systems

Testing of AI systems can only demonstrate so much, given the complexity of the systems and the challenge of devising a meaningful and complete set of tests. A phased approach to deployment is therefore important for building confidence and experience. Understanding of the consequences of maloperation, and identification of effective controls, can be improved by:

  • Using AI in the background of existing systems,
  • Using trials to test the application in non-safety critical environments, and
  • Having a phased, risk informed, process to move from advisory to supervisory use.

Dealing with the failure of AI systems

Defining and recognising failure of AI systems is crucial. There is an important trade-off between constraining the application of AI to such an extent that its usefulness is limited and recognising when failure can be tolerated and when it cannot. Concepts already used in nuclear are applicable to AI, such as:

  • Hierarchy of control;
  • Defence in depth and the use of an independent (non-AI) system; and
  • Use of diverse AI systems.

Skills and experience

The sector needs access to, or ways to develop, skills and experience in the application of AI in nuclear.

We asked the panel to identify potential applications of AI that could be challenging to regulate. These will then be explored in the Regulatory Sandbox we are developing. Sandboxing enables innovators to test and trial new solutions in a safe environment, without the pressure of the usual rules applying. This will allow industry and regulators to develop appropriate confidence in AI, understand ways of regulating it, and gather industry views on what an appropriate safety case could look like.